00:00:00.001 Started by upstream project "autotest-per-patch" build number 126204 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.040 The recommended git tool is: git 00:00:00.041 using credential 00000000-0000-0000-0000-000000000002 00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.059 Fetching changes from the remote Git repository 00:00:00.061 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.079 Using shallow fetch with depth 1 00:00:00.079 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.079 > git --version # timeout=10 00:00:00.114 > git --version # 'git version 2.39.2' 00:00:00.114 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.167 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.167 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.602 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.613 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.624 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:03.624 > git config core.sparsecheckout # timeout=10 00:00:03.635 > git read-tree -mu HEAD # timeout=10 00:00:03.650 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:03.670 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:03.670 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:03.763 [Pipeline] Start of Pipeline 00:00:03.777 [Pipeline] library 00:00:03.778 Loading library shm_lib@master 00:00:03.778 Library shm_lib@master is cached. Copying from home. 00:00:03.791 [Pipeline] node 00:00:03.797 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:03.799 [Pipeline] { 00:00:03.808 [Pipeline] catchError 00:00:03.809 [Pipeline] { 00:00:03.821 [Pipeline] wrap 00:00:03.830 [Pipeline] { 00:00:03.838 [Pipeline] stage 00:00:03.841 [Pipeline] { (Prologue) 00:00:03.859 [Pipeline] echo 00:00:03.860 Node: VM-host-SM9 00:00:03.865 [Pipeline] cleanWs 00:00:03.873 [WS-CLEANUP] Deleting project workspace... 00:00:03.873 [WS-CLEANUP] Deferred wipeout is used... 
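For reference, the Jenkins git plumbing above amounts to a shallow fetch of the pinned jbp revision followed by a detached checkout. A minimal by-hand sketch of the same checkout (credentials, proxy and workspace path omitted, purely illustrative):

    git init jbp && cd jbp
    git fetch --depth=1 https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 7caca6989ac753a10259529aadac5754060382af   # the FETCH_HEAD revision logged above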
00:00:03.879 [WS-CLEANUP] done 00:00:04.076 [Pipeline] setCustomBuildProperty 00:00:04.154 [Pipeline] httpRequest 00:00:04.176 [Pipeline] echo 00:00:04.178 Sorcerer 10.211.164.101 is alive 00:00:04.188 [Pipeline] httpRequest 00:00:04.193 HttpMethod: GET 00:00:04.194 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.194 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.195 Response Code: HTTP/1.1 200 OK 00:00:04.195 Success: Status code 200 is in the accepted range: 200,404 00:00:04.196 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.631 [Pipeline] sh 00:00:04.908 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:04.925 [Pipeline] httpRequest 00:00:04.940 [Pipeline] echo 00:00:04.942 Sorcerer 10.211.164.101 is alive 00:00:04.948 [Pipeline] httpRequest 00:00:04.952 HttpMethod: GET 00:00:04.952 URL: http://10.211.164.101/packages/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz 00:00:04.953 Sending request to url: http://10.211.164.101/packages/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz 00:00:04.957 Response Code: HTTP/1.1 200 OK 00:00:04.957 Success: Status code 200 is in the accepted range: 200,404 00:00:04.958 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz 00:00:42.583 [Pipeline] sh 00:00:42.863 + tar --no-same-owner -xf spdk_72fc6988fe354a00b8fe81f2b1b3a44e05925c76.tar.gz 00:00:46.164 [Pipeline] sh 00:00:46.442 + git -C spdk log --oneline -n5 00:00:46.442 72fc6988f nvmf: add nvmf_update_mdns_prr 00:00:46.442 97f71d59d nvmf: consolidate listener addition in avahi_entry_group_add_listeners 00:00:46.442 719d03c6a sock/uring: only register net impl if supported 00:00:46.442 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:46.442 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:46.460 [Pipeline] writeFile 00:00:46.478 [Pipeline] sh 00:00:46.754 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:46.767 [Pipeline] sh 00:00:47.045 + cat autorun-spdk.conf 00:00:47.045 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.045 SPDK_TEST_NVMF=1 00:00:47.045 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.045 SPDK_TEST_USDT=1 00:00:47.045 SPDK_TEST_NVMF_MDNS=1 00:00:47.045 SPDK_RUN_UBSAN=1 00:00:47.045 NET_TYPE=virt 00:00:47.045 SPDK_JSONRPC_GO_CLIENT=1 00:00:47.045 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:47.052 RUN_NIGHTLY=0 00:00:47.054 [Pipeline] } 00:00:47.071 [Pipeline] // stage 00:00:47.092 [Pipeline] stage 00:00:47.095 [Pipeline] { (Run VM) 00:00:47.114 [Pipeline] sh 00:00:47.394 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:47.394 + echo 'Start stage prepare_nvme.sh' 00:00:47.394 Start stage prepare_nvme.sh 00:00:47.394 + [[ -n 3 ]] 00:00:47.394 + disk_prefix=ex3 00:00:47.394 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:47.394 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:47.395 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:47.395 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:47.395 ++ SPDK_TEST_NVMF=1 00:00:47.395 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:47.395 ++ SPDK_TEST_USDT=1 00:00:47.395 ++ SPDK_TEST_NVMF_MDNS=1 00:00:47.395 ++ SPDK_RUN_UBSAN=1 00:00:47.395 ++ NET_TYPE=virt 00:00:47.395 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:47.395 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:47.395 ++ RUN_NIGHTLY=0 00:00:47.395 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:47.395 + nvme_files=() 00:00:47.395 + declare -A nvme_files 00:00:47.395 + backend_dir=/var/lib/libvirt/images/backends 00:00:47.395 + nvme_files['nvme.img']=5G 00:00:47.395 + nvme_files['nvme-cmb.img']=5G 00:00:47.395 + nvme_files['nvme-multi0.img']=4G 00:00:47.395 + nvme_files['nvme-multi1.img']=4G 00:00:47.395 + nvme_files['nvme-multi2.img']=4G 00:00:47.395 + nvme_files['nvme-openstack.img']=8G 00:00:47.395 + nvme_files['nvme-zns.img']=5G 00:00:47.395 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:47.395 + (( SPDK_TEST_FTL == 1 )) 00:00:47.395 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:47.395 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:47.395 + for nvme in "${!nvme_files[@]}" 00:00:47.395 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:47.395 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.395 + for nvme in "${!nvme_files[@]}" 00:00:47.395 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:47.395 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.395 + for nvme in "${!nvme_files[@]}" 00:00:47.395 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:47.652 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:47.652 + for nvme in "${!nvme_files[@]}" 00:00:47.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:47.652 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:47.652 + for nvme in "${!nvme_files[@]}" 00:00:47.652 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:47.910 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:47.910 + for nvme in "${!nvme_files[@]}" 00:00:47.910 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:48.170 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:48.170 + for nvme in "${!nvme_files[@]}" 00:00:48.170 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:48.429 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:48.429 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:48.429 + echo 'End stage prepare_nvme.sh' 00:00:48.429 End stage prepare_nvme.sh 00:00:48.440 [Pipeline] sh 00:00:48.717 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:48.717 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:00:48.717 00:00:48.717 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:48.717 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:48.717 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:48.717 HELP=0 00:00:48.717 DRY_RUN=0 00:00:48.717 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:48.717 NVME_DISKS_TYPE=nvme,nvme, 00:00:48.717 NVME_AUTO_CREATE=0 00:00:48.717 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:48.717 NVME_CMB=,, 00:00:48.717 NVME_PMR=,, 00:00:48.717 NVME_ZNS=,, 00:00:48.717 NVME_MS=,, 00:00:48.717 NVME_FDP=,, 00:00:48.717 SPDK_VAGRANT_DISTRO=fedora38 00:00:48.717 SPDK_VAGRANT_VMCPU=10 00:00:48.717 SPDK_VAGRANT_VMRAM=12288 00:00:48.717 SPDK_VAGRANT_PROVIDER=libvirt 00:00:48.717 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:48.717 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:48.717 SPDK_OPENSTACK_NETWORK=0 00:00:48.717 VAGRANT_PACKAGE_BOX=0 00:00:48.717 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:48.717 FORCE_DISTRO=true 00:00:48.717 VAGRANT_BOX_VERSION= 00:00:48.717 EXTRA_VAGRANTFILES= 00:00:48.717 NIC_MODEL=e1000 00:00:48.717 00:00:48.717 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:00:48.717 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:51.995 Bringing machine 'default' up with 'libvirt' provider... 00:00:52.928 ==> default: Creating image (snapshot of base box volume). 00:00:52.928 ==> default: Creating domain with the following settings... 
00:00:52.928 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721053052_1e19d7d5d7bf0b62c13c 00:00:52.928 ==> default: -- Domain type: kvm 00:00:52.928 ==> default: -- Cpus: 10 00:00:52.928 ==> default: -- Feature: acpi 00:00:52.928 ==> default: -- Feature: apic 00:00:52.928 ==> default: -- Feature: pae 00:00:52.928 ==> default: -- Memory: 12288M 00:00:52.928 ==> default: -- Memory Backing: hugepages: 00:00:52.928 ==> default: -- Management MAC: 00:00:52.928 ==> default: -- Loader: 00:00:52.928 ==> default: -- Nvram: 00:00:52.928 ==> default: -- Base box: spdk/fedora38 00:00:52.928 ==> default: -- Storage pool: default 00:00:52.928 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721053052_1e19d7d5d7bf0b62c13c.img (20G) 00:00:52.928 ==> default: -- Volume Cache: default 00:00:52.928 ==> default: -- Kernel: 00:00:52.928 ==> default: -- Initrd: 00:00:52.928 ==> default: -- Graphics Type: vnc 00:00:52.928 ==> default: -- Graphics Port: -1 00:00:52.928 ==> default: -- Graphics IP: 127.0.0.1 00:00:52.928 ==> default: -- Graphics Password: Not defined 00:00:52.928 ==> default: -- Video Type: cirrus 00:00:52.928 ==> default: -- Video VRAM: 9216 00:00:52.928 ==> default: -- Sound Type: 00:00:52.928 ==> default: -- Keymap: en-us 00:00:52.928 ==> default: -- TPM Path: 00:00:52.928 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:52.928 ==> default: -- Command line args: 00:00:52.928 ==> default: -> value=-device, 00:00:52.928 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:52.928 ==> default: -> value=-drive, 00:00:52.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:52.928 ==> default: -> value=-device, 00:00:52.928 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.928 ==> default: -> value=-device, 00:00:52.928 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:52.928 ==> default: -> value=-drive, 00:00:52.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:52.928 ==> default: -> value=-device, 00:00:52.928 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.928 ==> default: -> value=-drive, 00:00:52.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:52.928 ==> default: -> value=-device, 00:00:52.928 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.928 ==> default: -> value=-drive, 00:00:52.928 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:52.928 ==> default: -> value=-device, 00:00:52.929 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:52.929 ==> default: Creating shared folders metadata... 00:00:52.929 ==> default: Starting domain. 00:00:54.329 ==> default: Waiting for domain to get an IP address... 00:01:12.543 ==> default: Waiting for SSH to become available... 00:01:13.915 ==> default: Configuring and enabling network interfaces... 
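For readability, the -drive/-device pairs logged above can be read as a single qemu-system-x86_64 command line. The sketch below hand-assembles only the NVMe topology from the logged values; the machine/accelerator flags are illustrative and the base-box system disk is omitted, as neither is taken from the log:

    qemu-system-x86_64 -machine accel=kvm -smp 10 -m 12288 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

In other words: two NVMe controllers (serials 12340 and 12341), the second exposing three namespaces backed by the multi0/1/2 images created in prepare_nvme.sh.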
00:01:18.097 default: SSH address: 192.168.121.60:22 00:01:18.097 default: SSH username: vagrant 00:01:18.097 default: SSH auth method: private key 00:01:19.470 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:27.579 ==> default: Mounting SSHFS shared folder... 00:01:28.146 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:28.146 ==> default: Checking Mount.. 00:01:29.582 ==> default: Folder Successfully Mounted! 00:01:29.582 ==> default: Running provisioner: file... 00:01:30.146 default: ~/.gitconfig => .gitconfig 00:01:30.404 00:01:30.404 SUCCESS! 00:01:30.404 00:01:30.404 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:30.404 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.404 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:30.404 00:01:30.413 [Pipeline] } 00:01:30.433 [Pipeline] // stage 00:01:30.444 [Pipeline] dir 00:01:30.444 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:30.446 [Pipeline] { 00:01:30.458 [Pipeline] catchError 00:01:30.459 [Pipeline] { 00:01:30.469 [Pipeline] sh 00:01:30.742 + vagrant ssh-config --host vagrant 00:01:30.742 + sed -ne /^Host/,$p 00:01:30.742 + tee ssh_conf 00:01:34.924 Host vagrant 00:01:34.924 HostName 192.168.121.60 00:01:34.924 User vagrant 00:01:34.924 Port 22 00:01:34.924 UserKnownHostsFile /dev/null 00:01:34.924 StrictHostKeyChecking no 00:01:34.924 PasswordAuthentication no 00:01:34.924 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:34.924 IdentitiesOnly yes 00:01:34.924 LogLevel FATAL 00:01:34.924 ForwardAgent yes 00:01:34.924 ForwardX11 yes 00:01:34.924 00:01:34.939 [Pipeline] withEnv 00:01:34.941 [Pipeline] { 00:01:34.955 [Pipeline] sh 00:01:35.230 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:35.230 source /etc/os-release 00:01:35.230 [[ -e /image.version ]] && img=$(< /image.version) 00:01:35.230 # Minimal, systemd-like check. 00:01:35.230 if [[ -e /.dockerenv ]]; then 00:01:35.230 # Clear garbage from the node's name: 00:01:35.230 # agt-er_autotest_547-896 -> autotest_547-896 00:01:35.230 # $HOSTNAME is the actual container id 00:01:35.230 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:35.230 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:35.230 # We can assume this is a mount from a host where container is running, 00:01:35.230 # so fetch its hostname to easily identify the target swarm worker. 
00:01:35.230 container="$(< /etc/hostname) ($agent)" 00:01:35.230 else 00:01:35.230 # Fallback 00:01:35.230 container=$agent 00:01:35.230 fi 00:01:35.230 fi 00:01:35.230 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:35.230 00:01:35.239 [Pipeline] } 00:01:35.257 [Pipeline] // withEnv 00:01:35.265 [Pipeline] setCustomBuildProperty 00:01:35.277 [Pipeline] stage 00:01:35.279 [Pipeline] { (Tests) 00:01:35.292 [Pipeline] sh 00:01:35.565 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:35.582 [Pipeline] sh 00:01:35.860 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:35.876 [Pipeline] timeout 00:01:35.876 Timeout set to expire in 40 min 00:01:35.878 [Pipeline] { 00:01:35.894 [Pipeline] sh 00:01:36.170 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:36.736 HEAD is now at 72fc6988f nvmf: add nvmf_update_mdns_prr 00:01:36.753 [Pipeline] sh 00:01:37.028 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:37.297 [Pipeline] sh 00:01:37.630 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.659 [Pipeline] sh 00:01:37.934 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:38.191 ++ readlink -f spdk_repo 00:01:38.191 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.191 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.191 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.191 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.191 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.191 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.191 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.191 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:38.191 + cd /home/vagrant/spdk_repo 00:01:38.191 + source /etc/os-release 00:01:38.191 ++ NAME='Fedora Linux' 00:01:38.191 ++ VERSION='38 (Cloud Edition)' 00:01:38.191 ++ ID=fedora 00:01:38.191 ++ VERSION_ID=38 00:01:38.191 ++ VERSION_CODENAME= 00:01:38.191 ++ PLATFORM_ID=platform:f38 00:01:38.191 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:38.191 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:38.191 ++ LOGO=fedora-logo-icon 00:01:38.191 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:38.191 ++ HOME_URL=https://fedoraproject.org/ 00:01:38.191 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:38.191 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:38.191 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:38.191 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:38.191 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:38.191 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:38.191 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:38.191 ++ SUPPORT_END=2024-05-14 00:01:38.191 ++ VARIANT='Cloud Edition' 00:01:38.191 ++ VARIANT_ID=cloud 00:01:38.191 + uname -a 00:01:38.191 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:38.191 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:38.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:38.449 Hugepages 00:01:38.449 node hugesize free / total 00:01:38.449 node0 1048576kB 0 / 0 00:01:38.449 node0 2048kB 0 / 0 00:01:38.449 00:01:38.449 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.707 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:38.707 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:38.707 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:38.707 + rm -f /tmp/spdk-ld-path 00:01:38.707 + source autorun-spdk.conf 00:01:38.707 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.707 ++ SPDK_TEST_NVMF=1 00:01:38.707 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.707 ++ SPDK_TEST_USDT=1 00:01:38.707 ++ SPDK_TEST_NVMF_MDNS=1 00:01:38.707 ++ SPDK_RUN_UBSAN=1 00:01:38.707 ++ NET_TYPE=virt 00:01:38.707 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:38.707 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.707 ++ RUN_NIGHTLY=0 00:01:38.707 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.707 + [[ -n '' ]] 00:01:38.707 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:38.707 + for M in /var/spdk/build-*-manifest.txt 00:01:38.707 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.707 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.707 + for M in /var/spdk/build-*-manifest.txt 00:01:38.707 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.707 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.707 ++ uname 00:01:38.707 + [[ Linux == \L\i\n\u\x ]] 00:01:38.707 + sudo dmesg -T 00:01:38.707 + sudo dmesg --clear 00:01:38.707 + dmesg_pid=5164 00:01:38.707 + [[ Fedora Linux == FreeBSD ]] 00:01:38.707 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.707 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.707 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.707 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.707 + sudo dmesg -Tw 00:01:38.707 + 
export FIO_BIN=/usr/src/fio-static/fio 00:01:38.707 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.707 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.707 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.707 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.707 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.707 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.707 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.707 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.707 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.707 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.707 Test configuration: 00:01:38.707 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.707 SPDK_TEST_NVMF=1 00:01:38.707 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:38.707 SPDK_TEST_USDT=1 00:01:38.707 SPDK_TEST_NVMF_MDNS=1 00:01:38.707 SPDK_RUN_UBSAN=1 00:01:38.707 NET_TYPE=virt 00:01:38.707 SPDK_JSONRPC_GO_CLIENT=1 00:01:38.707 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.707 RUN_NIGHTLY=0 14:18:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:38.707 14:18:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.707 14:18:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.707 14:18:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.707 14:18:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.707 14:18:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.707 14:18:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.707 14:18:18 -- paths/export.sh@5 -- $ export PATH 00:01:38.707 14:18:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.707 14:18:18 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:38.707 14:18:18 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:38.965 14:18:18 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721053098.XXXXXX 00:01:38.965 14:18:18 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721053098.2lvgBw 00:01:38.965 14:18:18 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:38.965 14:18:18 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:38.965 14:18:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:38.965 14:18:18 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:38.965 14:18:18 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.965 14:18:18 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:38.965 14:18:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:38.965 14:18:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.965 14:18:18 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:38.965 14:18:18 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:38.965 14:18:18 -- pm/common@17 -- $ local monitor 00:01:38.965 14:18:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.965 14:18:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.965 14:18:18 -- pm/common@25 -- $ sleep 1 00:01:38.965 14:18:18 -- pm/common@21 -- $ date +%s 00:01:38.965 14:18:18 -- pm/common@21 -- $ date +%s 00:01:38.965 14:18:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721053098 00:01:38.965 14:18:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721053098 00:01:38.965 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721053098_collect-vmstat.pm.log 00:01:38.965 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721053098_collect-cpu-load.pm.log 00:01:39.898 14:18:19 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:39.898 14:18:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.898 14:18:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.898 14:18:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:39.898 14:18:19 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.898 Mon Jul 15 02:18:19 PM UTC 2024 00:01:39.898 14:18:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.898 v24.09-pre-204-g72fc6988f 00:01:39.898 14:18:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:39.898 14:18:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.898 14:18:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.898 14:18:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:39.898 14:18:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:39.898 14:18:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.898 ************************************ 00:01:39.898 START TEST ubsan 00:01:39.898 ************************************ 00:01:39.898 using ubsan 00:01:39.898 14:18:19 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:39.898 00:01:39.898 
real 0m0.000s 00:01:39.898 user 0m0.000s 00:01:39.898 sys 0m0.000s 00:01:39.898 14:18:19 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:39.898 14:18:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.898 ************************************ 00:01:39.898 END TEST ubsan 00:01:39.898 ************************************ 00:01:39.898 14:18:19 -- common/autotest_common.sh@1142 -- $ return 0 00:01:39.898 14:18:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:39.898 14:18:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:39.898 14:18:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:39.898 14:18:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:39.898 14:18:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:39.898 14:18:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:39.898 14:18:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:39.898 14:18:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:39.898 14:18:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:40.156 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:40.156 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:40.415 Using 'verbs' RDMA provider 00:01:53.619 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:05.849 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:05.849 go version go1.21.1 linux/amd64 00:02:06.107 Creating mk/config.mk...done. 00:02:06.107 Creating mk/cc.flags.mk...done. 00:02:06.107 Type 'make' to build. 00:02:06.107 14:18:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:06.107 14:18:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:06.107 14:18:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:06.107 14:18:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.107 ************************************ 00:02:06.107 START TEST make 00:02:06.107 ************************************ 00:02:06.107 14:18:45 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:06.365 make[1]: Nothing to be done for 'all'. 
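Stripped of the CI wrapper, the SPDK build step recorded above reduces to a configure-and-make sequence. A sketch of running the same step by hand, with the flags taken verbatim from the logged configure invocation (repo and fio paths as recorded above):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
    make -j10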
00:02:28.287 The Meson build system 00:02:28.287 Version: 1.3.1 00:02:28.287 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:28.287 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:28.287 Build type: native build 00:02:28.287 Program cat found: YES (/usr/bin/cat) 00:02:28.287 Project name: DPDK 00:02:28.287 Project version: 24.03.0 00:02:28.287 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:28.287 C linker for the host machine: cc ld.bfd 2.39-16 00:02:28.287 Host machine cpu family: x86_64 00:02:28.287 Host machine cpu: x86_64 00:02:28.287 Message: ## Building in Developer Mode ## 00:02:28.287 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:28.287 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:28.287 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:28.287 Program python3 found: YES (/usr/bin/python3) 00:02:28.287 Program cat found: YES (/usr/bin/cat) 00:02:28.287 Compiler for C supports arguments -march=native: YES 00:02:28.287 Checking for size of "void *" : 8 00:02:28.287 Checking for size of "void *" : 8 (cached) 00:02:28.287 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:28.287 Library m found: YES 00:02:28.287 Library numa found: YES 00:02:28.287 Has header "numaif.h" : YES 00:02:28.287 Library fdt found: NO 00:02:28.287 Library execinfo found: NO 00:02:28.287 Has header "execinfo.h" : YES 00:02:28.287 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:28.287 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:28.287 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:28.287 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:28.287 Run-time dependency openssl found: YES 3.0.9 00:02:28.287 Run-time dependency libpcap found: YES 1.10.4 00:02:28.287 Has header "pcap.h" with dependency libpcap: YES 00:02:28.287 Compiler for C supports arguments -Wcast-qual: YES 00:02:28.287 Compiler for C supports arguments -Wdeprecated: YES 00:02:28.287 Compiler for C supports arguments -Wformat: YES 00:02:28.287 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:28.287 Compiler for C supports arguments -Wformat-security: NO 00:02:28.287 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.287 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:28.287 Compiler for C supports arguments -Wnested-externs: YES 00:02:28.287 Compiler for C supports arguments -Wold-style-definition: YES 00:02:28.287 Compiler for C supports arguments -Wpointer-arith: YES 00:02:28.287 Compiler for C supports arguments -Wsign-compare: YES 00:02:28.287 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:28.287 Compiler for C supports arguments -Wundef: YES 00:02:28.287 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.287 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:28.287 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:28.287 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:28.287 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:28.287 Program objdump found: YES (/usr/bin/objdump) 00:02:28.287 Compiler for C supports arguments -mavx512f: YES 00:02:28.287 Checking if "AVX512 checking" compiles: YES 00:02:28.287 Fetching value of define "__SSE4_2__" : 1 00:02:28.287 Fetching value of define 
"__AES__" : 1 00:02:28.287 Fetching value of define "__AVX__" : 1 00:02:28.287 Fetching value of define "__AVX2__" : 1 00:02:28.287 Fetching value of define "__AVX512BW__" : (undefined) 00:02:28.287 Fetching value of define "__AVX512CD__" : (undefined) 00:02:28.287 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:28.287 Fetching value of define "__AVX512F__" : (undefined) 00:02:28.287 Fetching value of define "__AVX512VL__" : (undefined) 00:02:28.287 Fetching value of define "__PCLMUL__" : 1 00:02:28.287 Fetching value of define "__RDRND__" : 1 00:02:28.287 Fetching value of define "__RDSEED__" : 1 00:02:28.287 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:28.287 Fetching value of define "__znver1__" : (undefined) 00:02:28.287 Fetching value of define "__znver2__" : (undefined) 00:02:28.287 Fetching value of define "__znver3__" : (undefined) 00:02:28.287 Fetching value of define "__znver4__" : (undefined) 00:02:28.288 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:28.288 Message: lib/log: Defining dependency "log" 00:02:28.288 Message: lib/kvargs: Defining dependency "kvargs" 00:02:28.288 Message: lib/telemetry: Defining dependency "telemetry" 00:02:28.288 Checking for function "getentropy" : NO 00:02:28.288 Message: lib/eal: Defining dependency "eal" 00:02:28.288 Message: lib/ring: Defining dependency "ring" 00:02:28.288 Message: lib/rcu: Defining dependency "rcu" 00:02:28.288 Message: lib/mempool: Defining dependency "mempool" 00:02:28.288 Message: lib/mbuf: Defining dependency "mbuf" 00:02:28.288 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:28.288 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:28.288 Compiler for C supports arguments -mpclmul: YES 00:02:28.288 Compiler for C supports arguments -maes: YES 00:02:28.288 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:28.288 Compiler for C supports arguments -mavx512bw: YES 00:02:28.288 Compiler for C supports arguments -mavx512dq: YES 00:02:28.288 Compiler for C supports arguments -mavx512vl: YES 00:02:28.288 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:28.288 Compiler for C supports arguments -mavx2: YES 00:02:28.288 Compiler for C supports arguments -mavx: YES 00:02:28.288 Message: lib/net: Defining dependency "net" 00:02:28.288 Message: lib/meter: Defining dependency "meter" 00:02:28.288 Message: lib/ethdev: Defining dependency "ethdev" 00:02:28.288 Message: lib/pci: Defining dependency "pci" 00:02:28.288 Message: lib/cmdline: Defining dependency "cmdline" 00:02:28.288 Message: lib/hash: Defining dependency "hash" 00:02:28.288 Message: lib/timer: Defining dependency "timer" 00:02:28.288 Message: lib/compressdev: Defining dependency "compressdev" 00:02:28.288 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:28.288 Message: lib/dmadev: Defining dependency "dmadev" 00:02:28.288 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:28.288 Message: lib/power: Defining dependency "power" 00:02:28.288 Message: lib/reorder: Defining dependency "reorder" 00:02:28.288 Message: lib/security: Defining dependency "security" 00:02:28.288 Has header "linux/userfaultfd.h" : YES 00:02:28.288 Has header "linux/vduse.h" : YES 00:02:28.288 Message: lib/vhost: Defining dependency "vhost" 00:02:28.288 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:28.288 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:28.288 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:28.288 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.288 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:28.288 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:28.288 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:28.288 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:28.288 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:28.288 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:28.288 Program doxygen found: YES (/usr/bin/doxygen) 00:02:28.288 Configuring doxy-api-html.conf using configuration 00:02:28.288 Configuring doxy-api-man.conf using configuration 00:02:28.288 Program mandb found: YES (/usr/bin/mandb) 00:02:28.288 Program sphinx-build found: NO 00:02:28.288 Configuring rte_build_config.h using configuration 00:02:28.288 Message: 00:02:28.288 ================= 00:02:28.288 Applications Enabled 00:02:28.288 ================= 00:02:28.288 00:02:28.288 apps: 00:02:28.288 00:02:28.288 00:02:28.288 Message: 00:02:28.288 ================= 00:02:28.288 Libraries Enabled 00:02:28.288 ================= 00:02:28.288 00:02:28.288 libs: 00:02:28.288 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:28.288 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:28.288 cryptodev, dmadev, power, reorder, security, vhost, 00:02:28.288 00:02:28.288 Message: 00:02:28.288 =============== 00:02:28.288 Drivers Enabled 00:02:28.288 =============== 00:02:28.288 00:02:28.288 common: 00:02:28.288 00:02:28.288 bus: 00:02:28.288 pci, vdev, 00:02:28.288 mempool: 00:02:28.288 ring, 00:02:28.288 dma: 00:02:28.288 00:02:28.288 net: 00:02:28.288 00:02:28.288 crypto: 00:02:28.288 00:02:28.288 compress: 00:02:28.288 00:02:28.288 vdpa: 00:02:28.288 00:02:28.288 00:02:28.288 Message: 00:02:28.288 ================= 00:02:28.288 Content Skipped 00:02:28.288 ================= 00:02:28.288 00:02:28.288 apps: 00:02:28.288 dumpcap: explicitly disabled via build config 00:02:28.288 graph: explicitly disabled via build config 00:02:28.288 pdump: explicitly disabled via build config 00:02:28.288 proc-info: explicitly disabled via build config 00:02:28.288 test-acl: explicitly disabled via build config 00:02:28.288 test-bbdev: explicitly disabled via build config 00:02:28.288 test-cmdline: explicitly disabled via build config 00:02:28.288 test-compress-perf: explicitly disabled via build config 00:02:28.288 test-crypto-perf: explicitly disabled via build config 00:02:28.288 test-dma-perf: explicitly disabled via build config 00:02:28.288 test-eventdev: explicitly disabled via build config 00:02:28.288 test-fib: explicitly disabled via build config 00:02:28.288 test-flow-perf: explicitly disabled via build config 00:02:28.288 test-gpudev: explicitly disabled via build config 00:02:28.288 test-mldev: explicitly disabled via build config 00:02:28.288 test-pipeline: explicitly disabled via build config 00:02:28.288 test-pmd: explicitly disabled via build config 00:02:28.288 test-regex: explicitly disabled via build config 00:02:28.288 test-sad: explicitly disabled via build config 00:02:28.288 test-security-perf: explicitly disabled via build config 00:02:28.288 00:02:28.288 libs: 00:02:28.288 argparse: explicitly disabled via build config 00:02:28.288 metrics: explicitly disabled via build config 00:02:28.288 acl: explicitly disabled via build config 00:02:28.288 bbdev: explicitly disabled via build config 00:02:28.288 
bitratestats: explicitly disabled via build config 00:02:28.288 bpf: explicitly disabled via build config 00:02:28.288 cfgfile: explicitly disabled via build config 00:02:28.288 distributor: explicitly disabled via build config 00:02:28.288 efd: explicitly disabled via build config 00:02:28.288 eventdev: explicitly disabled via build config 00:02:28.288 dispatcher: explicitly disabled via build config 00:02:28.288 gpudev: explicitly disabled via build config 00:02:28.288 gro: explicitly disabled via build config 00:02:28.288 gso: explicitly disabled via build config 00:02:28.288 ip_frag: explicitly disabled via build config 00:02:28.288 jobstats: explicitly disabled via build config 00:02:28.288 latencystats: explicitly disabled via build config 00:02:28.288 lpm: explicitly disabled via build config 00:02:28.288 member: explicitly disabled via build config 00:02:28.288 pcapng: explicitly disabled via build config 00:02:28.288 rawdev: explicitly disabled via build config 00:02:28.288 regexdev: explicitly disabled via build config 00:02:28.288 mldev: explicitly disabled via build config 00:02:28.288 rib: explicitly disabled via build config 00:02:28.288 sched: explicitly disabled via build config 00:02:28.288 stack: explicitly disabled via build config 00:02:28.288 ipsec: explicitly disabled via build config 00:02:28.288 pdcp: explicitly disabled via build config 00:02:28.288 fib: explicitly disabled via build config 00:02:28.288 port: explicitly disabled via build config 00:02:28.288 pdump: explicitly disabled via build config 00:02:28.288 table: explicitly disabled via build config 00:02:28.288 pipeline: explicitly disabled via build config 00:02:28.288 graph: explicitly disabled via build config 00:02:28.288 node: explicitly disabled via build config 00:02:28.288 00:02:28.288 drivers: 00:02:28.288 common/cpt: not in enabled drivers build config 00:02:28.288 common/dpaax: not in enabled drivers build config 00:02:28.288 common/iavf: not in enabled drivers build config 00:02:28.288 common/idpf: not in enabled drivers build config 00:02:28.288 common/ionic: not in enabled drivers build config 00:02:28.288 common/mvep: not in enabled drivers build config 00:02:28.288 common/octeontx: not in enabled drivers build config 00:02:28.288 bus/auxiliary: not in enabled drivers build config 00:02:28.288 bus/cdx: not in enabled drivers build config 00:02:28.288 bus/dpaa: not in enabled drivers build config 00:02:28.288 bus/fslmc: not in enabled drivers build config 00:02:28.288 bus/ifpga: not in enabled drivers build config 00:02:28.288 bus/platform: not in enabled drivers build config 00:02:28.288 bus/uacce: not in enabled drivers build config 00:02:28.288 bus/vmbus: not in enabled drivers build config 00:02:28.288 common/cnxk: not in enabled drivers build config 00:02:28.288 common/mlx5: not in enabled drivers build config 00:02:28.288 common/nfp: not in enabled drivers build config 00:02:28.288 common/nitrox: not in enabled drivers build config 00:02:28.288 common/qat: not in enabled drivers build config 00:02:28.288 common/sfc_efx: not in enabled drivers build config 00:02:28.288 mempool/bucket: not in enabled drivers build config 00:02:28.288 mempool/cnxk: not in enabled drivers build config 00:02:28.288 mempool/dpaa: not in enabled drivers build config 00:02:28.288 mempool/dpaa2: not in enabled drivers build config 00:02:28.288 mempool/octeontx: not in enabled drivers build config 00:02:28.288 mempool/stack: not in enabled drivers build config 00:02:28.288 dma/cnxk: not in enabled drivers build 
config 00:02:28.288 dma/dpaa: not in enabled drivers build config 00:02:28.288 dma/dpaa2: not in enabled drivers build config 00:02:28.288 dma/hisilicon: not in enabled drivers build config 00:02:28.288 dma/idxd: not in enabled drivers build config 00:02:28.288 dma/ioat: not in enabled drivers build config 00:02:28.288 dma/skeleton: not in enabled drivers build config 00:02:28.288 net/af_packet: not in enabled drivers build config 00:02:28.288 net/af_xdp: not in enabled drivers build config 00:02:28.288 net/ark: not in enabled drivers build config 00:02:28.288 net/atlantic: not in enabled drivers build config 00:02:28.288 net/avp: not in enabled drivers build config 00:02:28.288 net/axgbe: not in enabled drivers build config 00:02:28.289 net/bnx2x: not in enabled drivers build config 00:02:28.289 net/bnxt: not in enabled drivers build config 00:02:28.289 net/bonding: not in enabled drivers build config 00:02:28.289 net/cnxk: not in enabled drivers build config 00:02:28.289 net/cpfl: not in enabled drivers build config 00:02:28.289 net/cxgbe: not in enabled drivers build config 00:02:28.289 net/dpaa: not in enabled drivers build config 00:02:28.289 net/dpaa2: not in enabled drivers build config 00:02:28.289 net/e1000: not in enabled drivers build config 00:02:28.289 net/ena: not in enabled drivers build config 00:02:28.289 net/enetc: not in enabled drivers build config 00:02:28.289 net/enetfec: not in enabled drivers build config 00:02:28.289 net/enic: not in enabled drivers build config 00:02:28.289 net/failsafe: not in enabled drivers build config 00:02:28.289 net/fm10k: not in enabled drivers build config 00:02:28.289 net/gve: not in enabled drivers build config 00:02:28.289 net/hinic: not in enabled drivers build config 00:02:28.289 net/hns3: not in enabled drivers build config 00:02:28.289 net/i40e: not in enabled drivers build config 00:02:28.289 net/iavf: not in enabled drivers build config 00:02:28.289 net/ice: not in enabled drivers build config 00:02:28.289 net/idpf: not in enabled drivers build config 00:02:28.289 net/igc: not in enabled drivers build config 00:02:28.289 net/ionic: not in enabled drivers build config 00:02:28.289 net/ipn3ke: not in enabled drivers build config 00:02:28.289 net/ixgbe: not in enabled drivers build config 00:02:28.289 net/mana: not in enabled drivers build config 00:02:28.289 net/memif: not in enabled drivers build config 00:02:28.289 net/mlx4: not in enabled drivers build config 00:02:28.289 net/mlx5: not in enabled drivers build config 00:02:28.289 net/mvneta: not in enabled drivers build config 00:02:28.289 net/mvpp2: not in enabled drivers build config 00:02:28.289 net/netvsc: not in enabled drivers build config 00:02:28.289 net/nfb: not in enabled drivers build config 00:02:28.289 net/nfp: not in enabled drivers build config 00:02:28.289 net/ngbe: not in enabled drivers build config 00:02:28.289 net/null: not in enabled drivers build config 00:02:28.289 net/octeontx: not in enabled drivers build config 00:02:28.289 net/octeon_ep: not in enabled drivers build config 00:02:28.289 net/pcap: not in enabled drivers build config 00:02:28.289 net/pfe: not in enabled drivers build config 00:02:28.289 net/qede: not in enabled drivers build config 00:02:28.289 net/ring: not in enabled drivers build config 00:02:28.289 net/sfc: not in enabled drivers build config 00:02:28.289 net/softnic: not in enabled drivers build config 00:02:28.289 net/tap: not in enabled drivers build config 00:02:28.289 net/thunderx: not in enabled drivers build config 00:02:28.289 
net/txgbe: not in enabled drivers build config 00:02:28.289 net/vdev_netvsc: not in enabled drivers build config 00:02:28.289 net/vhost: not in enabled drivers build config 00:02:28.289 net/virtio: not in enabled drivers build config 00:02:28.289 net/vmxnet3: not in enabled drivers build config 00:02:28.289 raw/*: missing internal dependency, "rawdev" 00:02:28.289 crypto/armv8: not in enabled drivers build config 00:02:28.289 crypto/bcmfs: not in enabled drivers build config 00:02:28.289 crypto/caam_jr: not in enabled drivers build config 00:02:28.289 crypto/ccp: not in enabled drivers build config 00:02:28.289 crypto/cnxk: not in enabled drivers build config 00:02:28.289 crypto/dpaa_sec: not in enabled drivers build config 00:02:28.289 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.289 crypto/ipsec_mb: not in enabled drivers build config 00:02:28.289 crypto/mlx5: not in enabled drivers build config 00:02:28.289 crypto/mvsam: not in enabled drivers build config 00:02:28.289 crypto/nitrox: not in enabled drivers build config 00:02:28.289 crypto/null: not in enabled drivers build config 00:02:28.289 crypto/octeontx: not in enabled drivers build config 00:02:28.289 crypto/openssl: not in enabled drivers build config 00:02:28.289 crypto/scheduler: not in enabled drivers build config 00:02:28.289 crypto/uadk: not in enabled drivers build config 00:02:28.289 crypto/virtio: not in enabled drivers build config 00:02:28.289 compress/isal: not in enabled drivers build config 00:02:28.289 compress/mlx5: not in enabled drivers build config 00:02:28.289 compress/nitrox: not in enabled drivers build config 00:02:28.289 compress/octeontx: not in enabled drivers build config 00:02:28.289 compress/zlib: not in enabled drivers build config 00:02:28.289 regex/*: missing internal dependency, "regexdev" 00:02:28.289 ml/*: missing internal dependency, "mldev" 00:02:28.289 vdpa/ifc: not in enabled drivers build config 00:02:28.289 vdpa/mlx5: not in enabled drivers build config 00:02:28.289 vdpa/nfp: not in enabled drivers build config 00:02:28.289 vdpa/sfc: not in enabled drivers build config 00:02:28.289 event/*: missing internal dependency, "eventdev" 00:02:28.289 baseband/*: missing internal dependency, "bbdev" 00:02:28.289 gpu/*: missing internal dependency, "gpudev" 00:02:28.289 00:02:28.289 00:02:28.289 Build targets in project: 85 00:02:28.289 00:02:28.289 DPDK 24.03.0 00:02:28.289 00:02:28.289 User defined options 00:02:28.289 buildtype : debug 00:02:28.289 default_library : shared 00:02:28.289 libdir : lib 00:02:28.289 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:28.289 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:28.289 c_link_args : 00:02:28.289 cpu_instruction_set: native 00:02:28.289 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:28.289 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:28.289 enable_docs : false 00:02:28.289 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:28.289 enable_kmods : false 00:02:28.289 max_lcores : 128 00:02:28.289 tests : false 00:02:28.289 00:02:28.289 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:28.289 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:28.289 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:28.289 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:28.289 [3/268] Linking static target lib/librte_kvargs.a 00:02:28.289 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:28.289 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:28.289 [6/268] Linking static target lib/librte_log.a 00:02:28.855 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.855 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:28.855 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.114 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.372 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.372 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.372 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:29.372 [14/268] Linking static target lib/librte_telemetry.a 00:02:29.372 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.630 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.630 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.630 [18/268] Linking target lib/librte_log.so.24.1 00:02:29.889 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:29.889 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.147 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.147 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:30.404 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.404 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.404 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.662 [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.662 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.662 [28/268] Linking target lib/librte_telemetry.so.24.1 00:02:30.920 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.920 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.920 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:31.178 [32/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.178 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:31.178 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:31.746 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:31.746 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:32.004 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:32.004 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:32.004 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:32.263 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.263 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.263 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:32.612 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:32.612 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:32.612 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:32.885 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.885 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:33.143 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:33.402 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:33.659 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:33.659 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:33.659 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:34.235 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:34.235 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:34.235 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:34.235 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:34.235 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:34.492 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:34.750 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:34.750 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:35.007 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.264 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.264 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.264 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.264 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:35.830 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.830 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:36.107 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:36.107 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:36.363 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:36.620 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.620 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:36.620 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.877 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.877 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:36.877 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.877 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:37.133 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.133 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.390 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.648 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.907 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.907 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.178 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.439 [85/268] Linking static target lib/librte_eal.a 00:02:38.439 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.439 [87/268] Linking static target lib/librte_ring.a 00:02:38.697 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.697 [89/268] Linking static target lib/librte_rcu.a 00:02:38.697 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.697 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.955 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.955 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.955 [94/268] Linking static target lib/librte_mempool.a 00:02:39.521 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.521 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.521 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.521 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.087 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.087 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.087 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.654 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.654 [103/268] Linking static target lib/librte_mbuf.a 00:02:40.912 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.912 [105/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.912 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.912 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.912 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.912 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.171 [110/268] Linking static target lib/librte_net.a 00:02:41.428 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.428 [112/268] Linking static target lib/librte_meter.a 00:02:41.687 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.687 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.945 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.202 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.202 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.202 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.459 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.716 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.283 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:43.283 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:43.541 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.815 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.815 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.815 [126/268] Linking static target lib/librte_pci.a 00:02:43.815 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.077 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.077 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.077 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.335 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.335 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.335 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.591 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.591 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.591 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.591 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.591 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:44.591 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:44.591 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:44.591 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.591 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.591 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.848 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.848 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.106 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.670 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.670 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.670 [149/268] Linking static target lib/librte_cmdline.a 00:02:45.670 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.670 [151/268] Linking static target lib/librte_ethdev.a 00:02:45.670 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:46.236 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:46.236 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:46.236 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:46.236 [156/268] Linking static target lib/librte_timer.a 00:02:46.236 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:46.802 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.060 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.061 [160/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.061 [161/268] Linking static target lib/librte_hash.a 00:02:47.061 [162/268] Linking static target lib/librte_compressdev.a 00:02:47.061 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.322 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:47.322 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:47.580 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:47.580 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.580 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.838 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.838 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.838 [171/268] Linking static target lib/librte_dmadev.a 00:02:48.096 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.096 [173/268] Linking static target lib/librte_cryptodev.a 00:02:48.096 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.354 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.354 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.612 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.612 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.870 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.128 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.387 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.387 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.387 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.387 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.645 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.645 [186/268] Linking static target lib/librte_security.a 00:02:49.903 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.903 [188/268] Linking static target lib/librte_power.a 00:02:50.469 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.469 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.727 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.727 [192/268] Linking static target lib/librte_reorder.a 00:02:50.727 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.727 [194/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.727 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.290 [196/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.290 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.548 [198/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.548 [199/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.806 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.806 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:51.806 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:52.064 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:52.064 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:52.698 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:52.698 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:52.698 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.955 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.955 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.955 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:52.955 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.213 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.213 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.213 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.213 [215/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.213 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:53.213 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.213 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:53.213 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.213 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:53.213 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:53.213 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:53.472 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.472 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.472 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.472 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.472 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:53.729 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.729 [229/268] Linking target lib/librte_eal.so.24.1 00:02:53.729 [230/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.986 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:53.986 [232/268] Linking target lib/librte_ring.so.24.1 00:02:53.986 [233/268] Linking target lib/librte_timer.so.24.1 00:02:53.986 [234/268] Linking target lib/librte_meter.so.24.1 00:02:53.986 [235/268] Linking target lib/librte_pci.so.24.1 00:02:53.986 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:53.986 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:53.986 [238/268] Generating symbol file 
lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:53.986 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:53.986 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:53.986 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:53.986 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:53.986 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:54.244 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:54.244 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:54.244 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:54.244 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:54.244 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:54.244 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:54.501 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:54.501 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:54.501 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:54.501 [253/268] Linking target lib/librte_net.so.24.1 00:02:54.501 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:54.758 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:54.758 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:54.758 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:54.758 [258/268] Linking target lib/librte_hash.so.24.1 00:02:54.758 [259/268] Linking target lib/librte_security.so.24.1 00:02:54.758 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:54.758 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:55.015 [262/268] Linking static target lib/librte_vhost.a 00:02:55.579 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.579 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.835 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:55.835 [266/268] Linking target lib/librte_power.so.24.1 00:02:56.397 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.397 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:56.397 INFO: autodetecting backend as ninja 00:02:56.397 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.770 CC lib/ut_mock/mock.o 00:02:57.770 CC lib/log/log_flags.o 00:02:57.770 CC lib/log/log_deprecated.o 00:02:57.770 CC lib/log/log.o 00:02:57.770 CC lib/ut/ut.o 00:02:57.770 LIB libspdk_ut.a 00:02:57.770 LIB libspdk_log.a 00:02:57.770 LIB libspdk_ut_mock.a 00:02:57.770 SO libspdk_ut.so.2.0 00:02:57.770 SO libspdk_ut_mock.so.6.0 00:02:57.770 SO libspdk_log.so.7.0 00:02:58.028 SYMLINK libspdk_ut_mock.so 00:02:58.028 SYMLINK libspdk_ut.so 00:02:58.028 SYMLINK libspdk_log.so 00:02:58.028 CC lib/ioat/ioat.o 00:02:58.028 CC lib/util/base64.o 00:02:58.028 CC lib/dma/dma.o 00:02:58.028 CC lib/util/bit_array.o 00:02:58.028 CC lib/util/cpuset.o 00:02:58.028 CC lib/util/crc16.o 00:02:58.028 CC lib/util/crc32.o 00:02:58.028 CC lib/util/crc32c.o 00:02:58.285 CXX lib/trace_parser/trace.o 00:02:58.285 CC lib/util/crc32_ieee.o 00:02:58.285 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:58.285 CC lib/util/crc64.o 00:02:58.285 CC lib/vfio_user/host/vfio_user.o 00:02:58.285 CC lib/util/dif.o 00:02:58.285 LIB libspdk_dma.a 00:02:58.545 CC lib/util/fd.o 00:02:58.545 SO libspdk_dma.so.4.0 00:02:58.545 CC lib/util/file.o 00:02:58.545 SYMLINK libspdk_dma.so 00:02:58.545 CC lib/util/hexlify.o 00:02:58.545 CC lib/util/iov.o 00:02:58.545 LIB libspdk_ioat.a 00:02:58.545 CC lib/util/math.o 00:02:58.545 SO libspdk_ioat.so.7.0 00:02:58.545 CC lib/util/pipe.o 00:02:58.545 SYMLINK libspdk_ioat.so 00:02:58.545 CC lib/util/strerror_tls.o 00:02:58.545 CC lib/util/string.o 00:02:58.804 CC lib/util/uuid.o 00:02:58.804 CC lib/util/fd_group.o 00:02:58.804 CC lib/util/xor.o 00:02:58.804 CC lib/util/zipf.o 00:02:58.804 LIB libspdk_vfio_user.a 00:02:58.804 SO libspdk_vfio_user.so.5.0 00:02:58.804 SYMLINK libspdk_vfio_user.so 00:02:59.062 LIB libspdk_util.a 00:02:59.062 SO libspdk_util.so.9.1 00:02:59.320 SYMLINK libspdk_util.so 00:02:59.578 LIB libspdk_trace_parser.a 00:02:59.578 SO libspdk_trace_parser.so.5.0 00:02:59.578 CC lib/rdma_provider/common.o 00:02:59.578 CC lib/idxd/idxd.o 00:02:59.578 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:59.578 CC lib/idxd/idxd_user.o 00:02:59.578 CC lib/json/json_parse.o 00:02:59.578 CC lib/vmd/vmd.o 00:02:59.578 CC lib/env_dpdk/env.o 00:02:59.578 CC lib/conf/conf.o 00:02:59.578 CC lib/rdma_utils/rdma_utils.o 00:02:59.578 SYMLINK libspdk_trace_parser.so 00:02:59.578 CC lib/vmd/led.o 00:02:59.836 CC lib/json/json_util.o 00:02:59.836 LIB libspdk_rdma_provider.a 00:02:59.836 LIB libspdk_conf.a 00:02:59.836 CC lib/idxd/idxd_kernel.o 00:02:59.836 CC lib/json/json_write.o 00:02:59.836 SO libspdk_conf.so.6.0 00:02:59.836 SO libspdk_rdma_provider.so.6.0 00:02:59.836 SYMLINK libspdk_rdma_provider.so 00:02:59.836 SYMLINK libspdk_conf.so 00:02:59.836 CC lib/env_dpdk/memory.o 00:02:59.836 CC lib/env_dpdk/pci.o 00:02:59.836 CC lib/env_dpdk/init.o 00:02:59.836 LIB libspdk_rdma_utils.a 00:03:00.094 CC lib/env_dpdk/threads.o 00:03:00.094 SO libspdk_rdma_utils.so.1.0 00:03:00.094 SYMLINK libspdk_rdma_utils.so 00:03:00.094 CC lib/env_dpdk/pci_ioat.o 00:03:00.094 CC lib/env_dpdk/pci_virtio.o 00:03:00.094 LIB libspdk_json.a 00:03:00.094 LIB libspdk_idxd.a 00:03:00.094 SO libspdk_json.so.6.0 00:03:00.094 SO libspdk_idxd.so.12.0 00:03:00.352 LIB libspdk_vmd.a 00:03:00.352 CC lib/env_dpdk/pci_vmd.o 00:03:00.352 CC lib/env_dpdk/pci_idxd.o 00:03:00.352 SO libspdk_vmd.so.6.0 00:03:00.352 SYMLINK libspdk_json.so 00:03:00.352 CC lib/env_dpdk/pci_event.o 00:03:00.352 SYMLINK libspdk_idxd.so 00:03:00.352 CC lib/env_dpdk/sigbus_handler.o 00:03:00.352 SYMLINK libspdk_vmd.so 00:03:00.352 CC lib/env_dpdk/pci_dpdk.o 00:03:00.352 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.610 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:00.610 CC lib/jsonrpc/jsonrpc_server.o 00:03:00.610 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:00.610 CC lib/jsonrpc/jsonrpc_client.o 00:03:00.610 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:00.867 LIB libspdk_jsonrpc.a 00:03:00.867 SO libspdk_jsonrpc.so.6.0 00:03:01.124 SYMLINK libspdk_jsonrpc.so 00:03:01.381 CC lib/rpc/rpc.o 00:03:01.639 LIB libspdk_env_dpdk.a 00:03:01.639 LIB libspdk_rpc.a 00:03:01.639 SO libspdk_rpc.so.6.0 00:03:01.639 SO libspdk_env_dpdk.so.14.1 00:03:01.639 SYMLINK libspdk_rpc.so 00:03:01.916 SYMLINK libspdk_env_dpdk.so 00:03:01.916 CC lib/notify/notify.o 00:03:01.916 CC lib/notify/notify_rpc.o 00:03:01.916 CC lib/keyring/keyring.o 00:03:01.916 CC lib/keyring/keyring_rpc.o 00:03:01.916 CC lib/trace/trace.o 00:03:01.916 CC 
lib/trace/trace_rpc.o 00:03:01.916 CC lib/trace/trace_flags.o 00:03:02.174 LIB libspdk_notify.a 00:03:02.174 SO libspdk_notify.so.6.0 00:03:02.174 SYMLINK libspdk_notify.so 00:03:02.174 LIB libspdk_trace.a 00:03:02.174 LIB libspdk_keyring.a 00:03:02.432 SO libspdk_keyring.so.1.0 00:03:02.432 SO libspdk_trace.so.10.0 00:03:02.432 SYMLINK libspdk_trace.so 00:03:02.432 SYMLINK libspdk_keyring.so 00:03:02.690 CC lib/thread/thread.o 00:03:02.690 CC lib/thread/iobuf.o 00:03:02.690 CC lib/sock/sock.o 00:03:02.690 CC lib/sock/sock_rpc.o 00:03:03.256 LIB libspdk_sock.a 00:03:03.256 SO libspdk_sock.so.10.0 00:03:03.256 SYMLINK libspdk_sock.so 00:03:03.514 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.514 CC lib/nvme/nvme_ctrlr.o 00:03:03.514 CC lib/nvme/nvme_fabric.o 00:03:03.514 CC lib/nvme/nvme_ns.o 00:03:03.514 CC lib/nvme/nvme_ns_cmd.o 00:03:03.514 CC lib/nvme/nvme_pcie_common.o 00:03:03.514 CC lib/nvme/nvme_pcie.o 00:03:03.514 CC lib/nvme/nvme_qpair.o 00:03:03.514 CC lib/nvme/nvme.o 00:03:04.454 LIB libspdk_thread.a 00:03:04.711 CC lib/nvme/nvme_quirks.o 00:03:04.711 CC lib/nvme/nvme_transport.o 00:03:04.711 SO libspdk_thread.so.10.1 00:03:04.711 CC lib/nvme/nvme_discovery.o 00:03:04.711 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:04.711 SYMLINK libspdk_thread.so 00:03:04.711 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:04.711 CC lib/nvme/nvme_tcp.o 00:03:04.711 CC lib/nvme/nvme_opal.o 00:03:04.969 CC lib/nvme/nvme_io_msg.o 00:03:05.226 CC lib/nvme/nvme_poll_group.o 00:03:05.226 CC lib/nvme/nvme_zns.o 00:03:05.483 CC lib/nvme/nvme_stubs.o 00:03:05.483 CC lib/nvme/nvme_auth.o 00:03:05.483 CC lib/accel/accel.o 00:03:05.741 CC lib/accel/accel_rpc.o 00:03:05.741 CC lib/blob/blobstore.o 00:03:05.741 CC lib/accel/accel_sw.o 00:03:06.396 CC lib/nvme/nvme_cuse.o 00:03:06.396 CC lib/nvme/nvme_rdma.o 00:03:06.396 CC lib/blob/request.o 00:03:06.396 CC lib/init/json_config.o 00:03:06.396 CC lib/init/subsystem.o 00:03:06.396 CC lib/virtio/virtio.o 00:03:06.654 CC lib/init/subsystem_rpc.o 00:03:06.654 CC lib/init/rpc.o 00:03:06.654 CC lib/virtio/virtio_vhost_user.o 00:03:06.654 CC lib/blob/zeroes.o 00:03:06.912 LIB libspdk_accel.a 00:03:06.912 SO libspdk_accel.so.15.1 00:03:06.912 LIB libspdk_init.a 00:03:06.912 SO libspdk_init.so.5.0 00:03:06.912 CC lib/virtio/virtio_vfio_user.o 00:03:06.912 CC lib/virtio/virtio_pci.o 00:03:06.912 SYMLINK libspdk_accel.so 00:03:06.912 CC lib/blob/blob_bs_dev.o 00:03:07.170 SYMLINK libspdk_init.so 00:03:07.170 CC lib/bdev/bdev.o 00:03:07.170 CC lib/bdev/bdev_rpc.o 00:03:07.428 CC lib/event/app.o 00:03:07.428 CC lib/event/reactor.o 00:03:07.428 CC lib/bdev/bdev_zone.o 00:03:07.428 CC lib/bdev/part.o 00:03:07.428 LIB libspdk_virtio.a 00:03:07.685 SO libspdk_virtio.so.7.0 00:03:07.685 CC lib/bdev/scsi_nvme.o 00:03:07.685 SYMLINK libspdk_virtio.so 00:03:07.685 CC lib/event/log_rpc.o 00:03:07.685 CC lib/event/app_rpc.o 00:03:07.685 CC lib/event/scheduler_static.o 00:03:08.251 LIB libspdk_event.a 00:03:08.251 SO libspdk_event.so.14.0 00:03:08.251 LIB libspdk_nvme.a 00:03:08.251 SYMLINK libspdk_event.so 00:03:08.508 SO libspdk_nvme.so.13.1 00:03:08.766 SYMLINK libspdk_nvme.so 00:03:09.698 LIB libspdk_blob.a 00:03:09.698 SO libspdk_blob.so.11.0 00:03:09.956 SYMLINK libspdk_blob.so 00:03:10.213 CC lib/blobfs/blobfs.o 00:03:10.213 CC lib/lvol/lvol.o 00:03:10.213 CC lib/blobfs/tree.o 00:03:10.470 LIB libspdk_bdev.a 00:03:10.470 SO libspdk_bdev.so.15.1 00:03:10.728 SYMLINK libspdk_bdev.so 00:03:10.990 CC lib/scsi/dev.o 00:03:10.990 CC lib/scsi/lun.o 00:03:10.990 CC lib/scsi/port.o 00:03:10.990 CC 
lib/scsi/scsi.o 00:03:10.990 CC lib/nbd/nbd.o 00:03:10.990 CC lib/nvmf/ctrlr.o 00:03:10.990 CC lib/ftl/ftl_core.o 00:03:10.990 CC lib/ublk/ublk.o 00:03:11.247 CC lib/scsi/scsi_bdev.o 00:03:11.247 CC lib/ublk/ublk_rpc.o 00:03:11.247 CC lib/scsi/scsi_pr.o 00:03:11.247 CC lib/scsi/scsi_rpc.o 00:03:11.247 LIB libspdk_lvol.a 00:03:11.247 SO libspdk_lvol.so.10.0 00:03:11.247 CC lib/nvmf/ctrlr_discovery.o 00:03:11.504 SYMLINK libspdk_lvol.so 00:03:11.504 CC lib/nvmf/ctrlr_bdev.o 00:03:11.504 LIB libspdk_blobfs.a 00:03:11.504 CC lib/nvmf/subsystem.o 00:03:11.504 SO libspdk_blobfs.so.10.0 00:03:11.504 CC lib/ftl/ftl_init.o 00:03:11.504 CC lib/nbd/nbd_rpc.o 00:03:11.762 SYMLINK libspdk_blobfs.so 00:03:11.762 CC lib/ftl/ftl_layout.o 00:03:11.762 CC lib/ftl/ftl_debug.o 00:03:11.762 LIB libspdk_nbd.a 00:03:11.762 LIB libspdk_ublk.a 00:03:11.762 SO libspdk_nbd.so.7.0 00:03:11.762 SO libspdk_ublk.so.3.0 00:03:12.020 CC lib/ftl/ftl_io.o 00:03:12.020 SYMLINK libspdk_nbd.so 00:03:12.020 CC lib/ftl/ftl_sb.o 00:03:12.020 CC lib/scsi/task.o 00:03:12.020 CC lib/ftl/ftl_l2p.o 00:03:12.020 SYMLINK libspdk_ublk.so 00:03:12.020 CC lib/ftl/ftl_l2p_flat.o 00:03:12.020 CC lib/nvmf/nvmf.o 00:03:12.020 CC lib/nvmf/nvmf_rpc.o 00:03:12.290 CC lib/nvmf/transport.o 00:03:12.290 CC lib/ftl/ftl_nv_cache.o 00:03:12.290 CC lib/ftl/ftl_band.o 00:03:12.290 LIB libspdk_scsi.a 00:03:12.290 CC lib/nvmf/tcp.o 00:03:12.548 CC lib/ftl/ftl_band_ops.o 00:03:12.548 SO libspdk_scsi.so.9.0 00:03:12.548 SYMLINK libspdk_scsi.so 00:03:12.548 CC lib/ftl/ftl_writer.o 00:03:12.805 CC lib/ftl/ftl_rq.o 00:03:13.062 CC lib/ftl/ftl_reloc.o 00:03:13.062 CC lib/ftl/ftl_l2p_cache.o 00:03:13.062 CC lib/ftl/ftl_p2l.o 00:03:13.319 CC lib/nvmf/stubs.o 00:03:13.319 CC lib/nvmf/mdns_server.o 00:03:13.577 CC lib/nvmf/rdma.o 00:03:13.577 CC lib/nvmf/auth.o 00:03:13.577 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.577 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.834 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.834 CC lib/iscsi/conn.o 00:03:13.834 CC lib/vhost/vhost.o 00:03:14.091 CC lib/vhost/vhost_rpc.o 00:03:14.091 CC lib/vhost/vhost_scsi.o 00:03:14.091 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.091 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.091 CC lib/iscsi/init_grp.o 00:03:14.349 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:14.607 CC lib/vhost/vhost_blk.o 00:03:14.607 CC lib/iscsi/iscsi.o 00:03:14.607 CC lib/iscsi/md5.o 00:03:14.607 CC lib/iscsi/param.o 00:03:14.865 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:14.865 CC lib/iscsi/portal_grp.o 00:03:15.124 CC lib/vhost/rte_vhost_user.o 00:03:15.124 CC lib/iscsi/tgt_node.o 00:03:15.124 CC lib/iscsi/iscsi_subsystem.o 00:03:15.124 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:15.383 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.383 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.641 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.641 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.641 CC lib/iscsi/iscsi_rpc.o 00:03:15.899 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.899 CC lib/ftl/utils/ftl_conf.o 00:03:15.899 CC lib/ftl/utils/ftl_md.o 00:03:15.899 CC lib/iscsi/task.o 00:03:16.157 CC lib/ftl/utils/ftl_mempool.o 00:03:16.157 CC lib/ftl/utils/ftl_bitmap.o 00:03:16.157 CC lib/ftl/utils/ftl_property.o 00:03:16.416 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:16.416 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:16.416 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:16.416 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:16.416 LIB libspdk_nvmf.a 00:03:16.674 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:16.674 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:16.674 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:16.674 SO libspdk_nvmf.so.18.1 00:03:16.674 LIB libspdk_vhost.a 00:03:16.674 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:16.674 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:16.674 SO libspdk_vhost.so.8.0 00:03:16.932 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:16.932 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:16.932 CC lib/ftl/base/ftl_base_dev.o 00:03:16.932 CC lib/ftl/base/ftl_base_bdev.o 00:03:16.932 CC lib/ftl/ftl_trace.o 00:03:16.932 SYMLINK libspdk_vhost.so 00:03:16.932 SYMLINK libspdk_nvmf.so 00:03:17.189 LIB libspdk_iscsi.a 00:03:17.189 SO libspdk_iscsi.so.8.0 00:03:17.189 LIB libspdk_ftl.a 00:03:17.446 SYMLINK libspdk_iscsi.so 00:03:17.446 SO libspdk_ftl.so.9.0 00:03:18.011 SYMLINK libspdk_ftl.so 00:03:18.575 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.575 CC module/sock/posix/posix.o 00:03:18.575 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.575 CC module/accel/error/accel_error.o 00:03:18.575 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:18.575 CC module/blob/bdev/blob_bdev.o 00:03:18.575 CC module/accel/iaa/accel_iaa.o 00:03:18.575 CC module/keyring/file/keyring.o 00:03:18.575 CC module/accel/ioat/accel_ioat.o 00:03:18.575 CC module/accel/dsa/accel_dsa.o 00:03:18.832 LIB libspdk_env_dpdk_rpc.a 00:03:18.832 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.832 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.832 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.832 CC module/keyring/file/keyring_rpc.o 00:03:18.832 CC module/accel/error/accel_error_rpc.o 00:03:18.832 LIB libspdk_scheduler_dynamic.a 00:03:18.832 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:19.090 CC module/accel/iaa/accel_iaa_rpc.o 00:03:19.090 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.090 SO libspdk_scheduler_dynamic.so.4.0 00:03:19.090 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:19.090 SYMLINK libspdk_scheduler_dynamic.so 00:03:19.090 LIB libspdk_blob_bdev.a 00:03:19.090 CC module/scheduler/gscheduler/gscheduler.o 00:03:19.090 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.090 SO libspdk_blob_bdev.so.11.0 00:03:19.090 LIB libspdk_keyring_file.a 00:03:19.090 LIB libspdk_accel_ioat.a 00:03:19.090 LIB libspdk_accel_error.a 00:03:19.090 SO libspdk_keyring_file.so.1.0 00:03:19.090 SO libspdk_accel_ioat.so.6.0 00:03:19.090 SYMLINK libspdk_blob_bdev.so 00:03:19.348 LIB libspdk_accel_iaa.a 00:03:19.348 SO libspdk_accel_error.so.2.0 00:03:19.348 SO libspdk_accel_iaa.so.3.0 00:03:19.348 SYMLINK libspdk_keyring_file.so 00:03:19.348 SYMLINK libspdk_accel_ioat.so 00:03:19.348 CC module/keyring/linux/keyring.o 00:03:19.348 CC module/keyring/linux/keyring_rpc.o 00:03:19.348 LIB libspdk_scheduler_gscheduler.a 00:03:19.348 LIB libspdk_accel_dsa.a 00:03:19.348 SYMLINK libspdk_accel_error.so 00:03:19.348 SYMLINK libspdk_accel_iaa.so 00:03:19.348 SO libspdk_scheduler_gscheduler.so.4.0 00:03:19.348 SO libspdk_accel_dsa.so.5.0 00:03:19.606 SYMLINK libspdk_scheduler_gscheduler.so 00:03:19.606 SYMLINK libspdk_accel_dsa.so 00:03:19.606 LIB libspdk_keyring_linux.a 00:03:19.606 CC module/bdev/error/vbdev_error.o 00:03:19.606 CC module/bdev/delay/vbdev_delay.o 00:03:19.606 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.606 CC module/bdev/gpt/gpt.o 00:03:19.606 SO libspdk_keyring_linux.so.1.0 00:03:19.606 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.606 CC module/bdev/malloc/bdev_malloc.o 00:03:19.867 CC module/bdev/null/bdev_null.o 00:03:19.867 SYMLINK libspdk_keyring_linux.so 00:03:19.867 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:19.867 LIB libspdk_sock_posix.a 00:03:19.867 CC module/bdev/nvme/bdev_nvme.o 00:03:19.867 
SO libspdk_sock_posix.so.6.0 00:03:19.867 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.867 CC module/bdev/gpt/vbdev_gpt.o 00:03:20.125 SYMLINK libspdk_sock_posix.so 00:03:20.125 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:20.125 CC module/bdev/error/vbdev_error_rpc.o 00:03:20.125 LIB libspdk_blobfs_bdev.a 00:03:20.125 SO libspdk_blobfs_bdev.so.6.0 00:03:20.125 CC module/bdev/null/bdev_null_rpc.o 00:03:20.125 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:20.383 SYMLINK libspdk_blobfs_bdev.so 00:03:20.383 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.383 LIB libspdk_bdev_error.a 00:03:20.383 LIB libspdk_bdev_malloc.a 00:03:20.383 CC module/bdev/nvme/nvme_rpc.o 00:03:20.383 LIB libspdk_bdev_gpt.a 00:03:20.383 SO libspdk_bdev_error.so.6.0 00:03:20.383 SO libspdk_bdev_malloc.so.6.0 00:03:20.383 SO libspdk_bdev_gpt.so.6.0 00:03:20.383 LIB libspdk_bdev_lvol.a 00:03:20.641 LIB libspdk_bdev_null.a 00:03:20.641 CC module/bdev/passthru/vbdev_passthru.o 00:03:20.641 SO libspdk_bdev_lvol.so.6.0 00:03:20.641 SYMLINK libspdk_bdev_gpt.so 00:03:20.641 SYMLINK libspdk_bdev_error.so 00:03:20.641 SYMLINK libspdk_bdev_malloc.so 00:03:20.641 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:20.641 CC module/bdev/nvme/bdev_mdns_client.o 00:03:20.641 SO libspdk_bdev_null.so.6.0 00:03:20.641 LIB libspdk_bdev_delay.a 00:03:20.641 SO libspdk_bdev_delay.so.6.0 00:03:20.641 SYMLINK libspdk_bdev_lvol.so 00:03:20.641 SYMLINK libspdk_bdev_null.so 00:03:20.641 SYMLINK libspdk_bdev_delay.so 00:03:20.900 CC module/bdev/raid/bdev_raid.o 00:03:20.900 CC module/bdev/nvme/vbdev_opal.o 00:03:20.900 LIB libspdk_bdev_passthru.a 00:03:20.900 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:20.900 CC module/bdev/split/vbdev_split.o 00:03:20.900 SO libspdk_bdev_passthru.so.6.0 00:03:20.900 CC module/bdev/aio/bdev_aio.o 00:03:20.900 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:20.900 CC module/bdev/ftl/bdev_ftl.o 00:03:21.158 SYMLINK libspdk_bdev_passthru.so 00:03:21.158 CC module/bdev/aio/bdev_aio_rpc.o 00:03:21.158 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.158 CC module/bdev/raid/bdev_raid_rpc.o 00:03:21.417 CC module/bdev/raid/bdev_raid_sb.o 00:03:21.417 CC module/bdev/iscsi/bdev_iscsi.o 00:03:21.417 CC module/bdev/raid/raid0.o 00:03:21.417 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:21.417 LIB libspdk_bdev_aio.a 00:03:21.417 LIB libspdk_bdev_split.a 00:03:21.417 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:21.417 SO libspdk_bdev_aio.so.6.0 00:03:21.674 SO libspdk_bdev_split.so.6.0 00:03:21.674 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:21.674 SYMLINK libspdk_bdev_aio.so 00:03:21.674 SYMLINK libspdk_bdev_split.so 00:03:21.674 CC module/bdev/raid/raid1.o 00:03:21.674 LIB libspdk_bdev_zone_block.a 00:03:21.674 CC module/bdev/raid/concat.o 00:03:21.674 LIB libspdk_bdev_ftl.a 00:03:21.932 SO libspdk_bdev_zone_block.so.6.0 00:03:21.932 SO libspdk_bdev_ftl.so.6.0 00:03:21.932 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:21.932 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:21.932 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:21.932 SYMLINK libspdk_bdev_zone_block.so 00:03:21.932 SYMLINK libspdk_bdev_ftl.so 00:03:21.932 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:21.932 LIB libspdk_bdev_iscsi.a 00:03:21.932 SO libspdk_bdev_iscsi.so.6.0 00:03:21.932 SYMLINK libspdk_bdev_iscsi.so 00:03:22.497 LIB libspdk_bdev_raid.a 00:03:22.497 SO libspdk_bdev_raid.so.6.0 00:03:22.497 LIB libspdk_bdev_virtio.a 00:03:22.497 SYMLINK libspdk_bdev_raid.so 00:03:22.497 SO libspdk_bdev_virtio.so.6.0 00:03:22.755 SYMLINK 
libspdk_bdev_virtio.so 00:03:23.319 LIB libspdk_bdev_nvme.a 00:03:23.319 SO libspdk_bdev_nvme.so.7.0 00:03:23.577 SYMLINK libspdk_bdev_nvme.so 00:03:23.834 CC module/event/subsystems/vmd/vmd.o 00:03:23.834 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:23.834 CC module/event/subsystems/sock/sock.o 00:03:23.834 CC module/event/subsystems/scheduler/scheduler.o 00:03:23.834 CC module/event/subsystems/keyring/keyring.o 00:03:23.834 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.834 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.834 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.092 LIB libspdk_event_vhost_blk.a 00:03:24.092 LIB libspdk_event_keyring.a 00:03:24.092 LIB libspdk_event_scheduler.a 00:03:24.092 SO libspdk_event_vhost_blk.so.3.0 00:03:24.092 SO libspdk_event_keyring.so.1.0 00:03:24.092 SO libspdk_event_scheduler.so.4.0 00:03:24.092 LIB libspdk_event_vmd.a 00:03:24.092 LIB libspdk_event_iobuf.a 00:03:24.092 LIB libspdk_event_sock.a 00:03:24.092 SYMLINK libspdk_event_vhost_blk.so 00:03:24.092 SYMLINK libspdk_event_keyring.so 00:03:24.092 SO libspdk_event_sock.so.5.0 00:03:24.092 SO libspdk_event_vmd.so.6.0 00:03:24.092 SO libspdk_event_iobuf.so.3.0 00:03:24.375 SYMLINK libspdk_event_scheduler.so 00:03:24.375 SYMLINK libspdk_event_sock.so 00:03:24.375 SYMLINK libspdk_event_iobuf.so 00:03:24.375 SYMLINK libspdk_event_vmd.so 00:03:24.633 CC module/event/subsystems/accel/accel.o 00:03:24.890 LIB libspdk_event_accel.a 00:03:24.890 SO libspdk_event_accel.so.6.0 00:03:24.890 SYMLINK libspdk_event_accel.so 00:03:25.148 CC module/event/subsystems/bdev/bdev.o 00:03:25.405 LIB libspdk_event_bdev.a 00:03:25.405 SO libspdk_event_bdev.so.6.0 00:03:25.405 SYMLINK libspdk_event_bdev.so 00:03:25.663 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:25.663 CC module/event/subsystems/ublk/ublk.o 00:03:25.663 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.663 CC module/event/subsystems/nbd/nbd.o 00:03:25.663 CC module/event/subsystems/scsi/scsi.o 00:03:25.920 LIB libspdk_event_nbd.a 00:03:25.920 LIB libspdk_event_scsi.a 00:03:25.920 LIB libspdk_event_ublk.a 00:03:25.920 SO libspdk_event_nbd.so.6.0 00:03:25.920 SO libspdk_event_scsi.so.6.0 00:03:25.920 SO libspdk_event_ublk.so.3.0 00:03:25.920 SYMLINK libspdk_event_scsi.so 00:03:25.920 SYMLINK libspdk_event_nbd.so 00:03:25.920 SYMLINK libspdk_event_ublk.so 00:03:25.920 LIB libspdk_event_nvmf.a 00:03:26.177 SO libspdk_event_nvmf.so.6.0 00:03:26.177 SYMLINK libspdk_event_nvmf.so 00:03:26.177 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.177 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.435 LIB libspdk_event_vhost_scsi.a 00:03:26.435 LIB libspdk_event_iscsi.a 00:03:26.435 SO libspdk_event_vhost_scsi.so.3.0 00:03:26.435 SO libspdk_event_iscsi.so.6.0 00:03:26.435 SYMLINK libspdk_event_vhost_scsi.so 00:03:26.435 SYMLINK libspdk_event_iscsi.so 00:03:26.693 SO libspdk.so.6.0 00:03:26.693 SYMLINK libspdk.so 00:03:26.950 TEST_HEADER include/spdk/accel.h 00:03:26.950 TEST_HEADER include/spdk/accel_module.h 00:03:26.950 CXX app/trace/trace.o 00:03:26.950 TEST_HEADER include/spdk/assert.h 00:03:26.950 CC app/trace_record/trace_record.o 00:03:26.950 TEST_HEADER include/spdk/barrier.h 00:03:26.950 TEST_HEADER include/spdk/base64.h 00:03:26.950 TEST_HEADER include/spdk/bdev.h 00:03:26.950 TEST_HEADER include/spdk/bdev_module.h 00:03:26.950 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.950 TEST_HEADER include/spdk/bit_array.h 00:03:26.950 TEST_HEADER include/spdk/bit_pool.h 00:03:26.950 TEST_HEADER include/spdk/blob_bdev.h 
00:03:26.950 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.950 TEST_HEADER include/spdk/blobfs.h 00:03:26.950 TEST_HEADER include/spdk/blob.h 00:03:26.950 TEST_HEADER include/spdk/conf.h 00:03:26.950 TEST_HEADER include/spdk/config.h 00:03:26.951 TEST_HEADER include/spdk/cpuset.h 00:03:26.951 TEST_HEADER include/spdk/crc16.h 00:03:26.951 TEST_HEADER include/spdk/crc32.h 00:03:26.951 TEST_HEADER include/spdk/crc64.h 00:03:26.951 TEST_HEADER include/spdk/dif.h 00:03:26.951 TEST_HEADER include/spdk/dma.h 00:03:26.951 TEST_HEADER include/spdk/endian.h 00:03:26.951 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.951 TEST_HEADER include/spdk/env.h 00:03:26.951 TEST_HEADER include/spdk/event.h 00:03:26.951 TEST_HEADER include/spdk/fd_group.h 00:03:26.951 TEST_HEADER include/spdk/fd.h 00:03:26.951 TEST_HEADER include/spdk/file.h 00:03:26.951 TEST_HEADER include/spdk/ftl.h 00:03:26.951 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.951 TEST_HEADER include/spdk/hexlify.h 00:03:26.951 TEST_HEADER include/spdk/histogram_data.h 00:03:26.951 TEST_HEADER include/spdk/idxd.h 00:03:26.951 CC app/nvmf_tgt/nvmf_main.o 00:03:26.951 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.951 TEST_HEADER include/spdk/init.h 00:03:26.951 CC app/iscsi_tgt/iscsi_tgt.o 00:03:26.951 TEST_HEADER include/spdk/ioat.h 00:03:26.951 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.951 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.951 TEST_HEADER include/spdk/json.h 00:03:26.951 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.951 TEST_HEADER include/spdk/keyring.h 00:03:26.951 TEST_HEADER include/spdk/keyring_module.h 00:03:26.951 CC test/thread/poller_perf/poller_perf.o 00:03:26.951 TEST_HEADER include/spdk/likely.h 00:03:26.951 CC examples/util/zipf/zipf.o 00:03:26.951 TEST_HEADER include/spdk/log.h 00:03:26.951 TEST_HEADER include/spdk/lvol.h 00:03:26.951 CC examples/ioat/perf/perf.o 00:03:26.951 TEST_HEADER include/spdk/memory.h 00:03:26.951 TEST_HEADER include/spdk/mmio.h 00:03:26.951 TEST_HEADER include/spdk/nbd.h 00:03:26.951 TEST_HEADER include/spdk/notify.h 00:03:27.208 TEST_HEADER include/spdk/nvme.h 00:03:27.208 TEST_HEADER include/spdk/nvme_intel.h 00:03:27.208 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:27.208 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:27.208 TEST_HEADER include/spdk/nvme_spec.h 00:03:27.208 TEST_HEADER include/spdk/nvme_zns.h 00:03:27.208 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:27.208 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:27.208 TEST_HEADER include/spdk/nvmf.h 00:03:27.208 TEST_HEADER include/spdk/nvmf_spec.h 00:03:27.208 TEST_HEADER include/spdk/nvmf_transport.h 00:03:27.208 TEST_HEADER include/spdk/opal.h 00:03:27.208 TEST_HEADER include/spdk/opal_spec.h 00:03:27.208 CC test/dma/test_dma/test_dma.o 00:03:27.208 TEST_HEADER include/spdk/pci_ids.h 00:03:27.208 TEST_HEADER include/spdk/pipe.h 00:03:27.208 CC test/app/bdev_svc/bdev_svc.o 00:03:27.208 TEST_HEADER include/spdk/queue.h 00:03:27.208 TEST_HEADER include/spdk/reduce.h 00:03:27.208 TEST_HEADER include/spdk/rpc.h 00:03:27.208 TEST_HEADER include/spdk/scheduler.h 00:03:27.208 TEST_HEADER include/spdk/scsi.h 00:03:27.208 TEST_HEADER include/spdk/scsi_spec.h 00:03:27.208 TEST_HEADER include/spdk/sock.h 00:03:27.208 TEST_HEADER include/spdk/stdinc.h 00:03:27.208 TEST_HEADER include/spdk/string.h 00:03:27.208 TEST_HEADER include/spdk/thread.h 00:03:27.208 TEST_HEADER include/spdk/trace.h 00:03:27.208 TEST_HEADER include/spdk/trace_parser.h 00:03:27.208 TEST_HEADER include/spdk/tree.h 00:03:27.208 TEST_HEADER include/spdk/ublk.h 00:03:27.208 
TEST_HEADER include/spdk/util.h 00:03:27.208 TEST_HEADER include/spdk/uuid.h 00:03:27.208 TEST_HEADER include/spdk/version.h 00:03:27.208 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:27.208 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:27.208 TEST_HEADER include/spdk/vhost.h 00:03:27.208 TEST_HEADER include/spdk/vmd.h 00:03:27.208 TEST_HEADER include/spdk/xor.h 00:03:27.208 TEST_HEADER include/spdk/zipf.h 00:03:27.208 CXX test/cpp_headers/accel.o 00:03:27.208 LINK zipf 00:03:27.208 LINK poller_perf 00:03:27.466 LINK spdk_trace_record 00:03:27.466 LINK ioat_perf 00:03:27.466 CXX test/cpp_headers/accel_module.o 00:03:27.466 LINK iscsi_tgt 00:03:27.466 LINK nvmf_tgt 00:03:27.466 LINK bdev_svc 00:03:27.724 LINK spdk_trace 00:03:27.724 CXX test/cpp_headers/assert.o 00:03:27.724 CC examples/ioat/verify/verify.o 00:03:27.724 CC test/env/vtophys/vtophys.o 00:03:27.724 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.724 LINK test_dma 00:03:27.724 CC test/env/mem_callbacks/mem_callbacks.o 00:03:27.724 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.981 CC test/env/memory/memory_ut.o 00:03:27.981 LINK vtophys 00:03:27.981 CXX test/cpp_headers/barrier.o 00:03:27.981 CC test/env/pci/pci_ut.o 00:03:27.981 LINK env_dpdk_post_init 00:03:27.981 LINK verify 00:03:28.239 CC app/spdk_tgt/spdk_tgt.o 00:03:28.239 CXX test/cpp_headers/base64.o 00:03:28.239 CC test/app/histogram_perf/histogram_perf.o 00:03:28.239 CC test/app/jsoncat/jsoncat.o 00:03:28.497 CC test/app/stub/stub.o 00:03:28.497 LINK mem_callbacks 00:03:28.497 CXX test/cpp_headers/bdev.o 00:03:28.497 LINK nvme_fuzz 00:03:28.497 LINK spdk_tgt 00:03:28.497 LINK histogram_perf 00:03:28.497 LINK jsoncat 00:03:28.755 LINK stub 00:03:28.755 LINK pci_ut 00:03:28.755 CXX test/cpp_headers/bdev_module.o 00:03:28.755 CC app/spdk_lspci/spdk_lspci.o 00:03:29.013 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:29.013 CC app/spdk_nvme_perf/perf.o 00:03:29.013 CC test/rpc_client/rpc_client_test.o 00:03:29.013 LINK spdk_lspci 00:03:29.270 LINK memory_ut 00:03:29.270 LINK rpc_client_test 00:03:29.270 CXX test/cpp_headers/bdev_zone.o 00:03:29.270 CC test/accel/dif/dif.o 00:03:29.270 CC test/blobfs/mkfs/mkfs.o 00:03:29.271 CC test/event/event_perf/event_perf.o 00:03:29.527 CXX test/cpp_headers/bit_array.o 00:03:29.527 CC test/event/reactor/reactor.o 00:03:29.527 LINK mkfs 00:03:29.527 CXX test/cpp_headers/bit_pool.o 00:03:29.810 CC test/event/reactor_perf/reactor_perf.o 00:03:29.810 LINK reactor 00:03:29.810 LINK event_perf 00:03:29.810 CC test/event/app_repeat/app_repeat.o 00:03:30.067 CXX test/cpp_headers/blob_bdev.o 00:03:30.067 LINK reactor_perf 00:03:30.067 LINK spdk_nvme_perf 00:03:30.067 LINK dif 00:03:30.067 LINK app_repeat 00:03:30.325 CXX test/cpp_headers/blobfs_bdev.o 00:03:30.325 CC test/event/scheduler/scheduler.o 00:03:30.582 CC test/nvme/aer/aer.o 00:03:30.582 CXX test/cpp_headers/blobfs.o 00:03:30.582 CC app/spdk_nvme_identify/identify.o 00:03:30.582 CC test/lvol/esnap/esnap.o 00:03:30.582 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:30.582 CC test/nvme/reset/reset.o 00:03:30.582 CC test/nvme/sgl/sgl.o 00:03:30.840 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:30.840 CXX test/cpp_headers/blob.o 00:03:30.840 LINK aer 00:03:30.840 LINK scheduler 00:03:31.098 LINK reset 00:03:31.098 CXX test/cpp_headers/conf.o 00:03:31.098 LINK sgl 00:03:31.354 CXX test/cpp_headers/config.o 00:03:31.354 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:31.354 CXX test/cpp_headers/cpuset.o 00:03:31.612 LINK vhost_fuzz 00:03:31.612 CC 
test/nvme/e2edp/nvme_dp.o 00:03:31.612 CC examples/sock/hello_world/hello_sock.o 00:03:31.612 CC examples/thread/thread/thread_ex.o 00:03:31.869 LINK iscsi_fuzz 00:03:31.869 CXX test/cpp_headers/crc16.o 00:03:31.869 LINK interrupt_tgt 00:03:32.127 LINK spdk_nvme_identify 00:03:32.127 CXX test/cpp_headers/crc32.o 00:03:32.127 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.127 LINK hello_sock 00:03:32.385 LINK nvme_dp 00:03:32.385 LINK thread 00:03:32.385 CXX test/cpp_headers/crc64.o 00:03:32.642 LINK lsvmd 00:03:32.642 CC app/spdk_top/spdk_top.o 00:03:32.642 CC app/spdk_nvme_discover/discovery_aer.o 00:03:32.642 CC test/bdev/bdevio/bdevio.o 00:03:32.642 CXX test/cpp_headers/dif.o 00:03:32.901 CC test/nvme/overhead/overhead.o 00:03:32.901 CC examples/idxd/perf/perf.o 00:03:32.901 LINK spdk_nvme_discover 00:03:32.901 CC examples/vmd/led/led.o 00:03:32.901 CC examples/nvme/hello_world/hello_world.o 00:03:32.901 CXX test/cpp_headers/dma.o 00:03:33.159 LINK led 00:03:33.159 CXX test/cpp_headers/endian.o 00:03:33.417 LINK overhead 00:03:33.417 LINK idxd_perf 00:03:33.417 LINK hello_world 00:03:33.417 LINK bdevio 00:03:33.417 CXX test/cpp_headers/env_dpdk.o 00:03:33.674 CXX test/cpp_headers/env.o 00:03:33.674 CC examples/accel/perf/accel_perf.o 00:03:33.932 CC test/nvme/err_injection/err_injection.o 00:03:33.932 CXX test/cpp_headers/event.o 00:03:33.932 CXX test/cpp_headers/fd_group.o 00:03:34.189 CC examples/nvme/reconnect/reconnect.o 00:03:34.189 CC app/vhost/vhost.o 00:03:34.189 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:34.446 LINK err_injection 00:03:34.446 LINK spdk_top 00:03:34.446 CXX test/cpp_headers/fd.o 00:03:34.446 LINK vhost 00:03:34.703 LINK reconnect 00:03:34.703 CC examples/nvme/arbitration/arbitration.o 00:03:34.703 LINK accel_perf 00:03:34.961 CXX test/cpp_headers/file.o 00:03:34.961 CC test/nvme/startup/startup.o 00:03:34.961 CC test/nvme/reserve/reserve.o 00:03:35.219 CC examples/nvme/hotplug/hotplug.o 00:03:35.219 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.219 LINK startup 00:03:35.219 CC app/spdk_dd/spdk_dd.o 00:03:35.219 CXX test/cpp_headers/ftl.o 00:03:35.219 LINK reserve 00:03:35.219 LINK nvme_manage 00:03:35.219 LINK arbitration 00:03:35.477 LINK cmb_copy 00:03:35.477 LINK hotplug 00:03:35.477 CXX test/cpp_headers/gpt_spec.o 00:03:35.734 CC examples/nvme/abort/abort.o 00:03:35.734 CC test/nvme/simple_copy/simple_copy.o 00:03:35.734 CXX test/cpp_headers/hexlify.o 00:03:35.734 CC examples/blob/hello_world/hello_blob.o 00:03:35.992 CC app/fio/nvme/fio_plugin.o 00:03:35.992 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:35.992 LINK spdk_dd 00:03:35.992 CXX test/cpp_headers/histogram_data.o 00:03:35.992 CC examples/bdev/hello_world/hello_bdev.o 00:03:35.992 LINK simple_copy 00:03:36.250 LINK hello_blob 00:03:36.250 LINK pmr_persistence 00:03:36.250 LINK abort 00:03:36.250 LINK hello_bdev 00:03:36.250 CXX test/cpp_headers/idxd.o 00:03:36.507 CC test/nvme/connect_stress/connect_stress.o 00:03:36.507 CC test/nvme/boot_partition/boot_partition.o 00:03:36.507 CXX test/cpp_headers/idxd_spec.o 00:03:36.507 CXX test/cpp_headers/init.o 00:03:36.507 LINK spdk_nvme 00:03:36.507 CC examples/blob/cli/blobcli.o 00:03:36.764 LINK connect_stress 00:03:36.764 LINK boot_partition 00:03:36.764 CC app/fio/bdev/fio_plugin.o 00:03:36.764 CXX test/cpp_headers/ioat.o 00:03:36.764 CC examples/bdev/bdevperf/bdevperf.o 00:03:36.764 CXX test/cpp_headers/ioat_spec.o 00:03:37.021 CC test/nvme/compliance/nvme_compliance.o 00:03:37.021 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.021 CXX 
test/cpp_headers/iscsi_spec.o 00:03:37.278 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.278 CC test/nvme/fdp/fdp.o 00:03:37.278 LINK blobcli 00:03:37.278 LINK nvme_compliance 00:03:37.278 CXX test/cpp_headers/json.o 00:03:37.278 LINK spdk_bdev 00:03:37.278 LINK fused_ordering 00:03:37.536 LINK doorbell_aers 00:03:37.536 CXX test/cpp_headers/jsonrpc.o 00:03:37.536 CXX test/cpp_headers/keyring.o 00:03:37.536 LINK bdevperf 00:03:37.794 CXX test/cpp_headers/keyring_module.o 00:03:37.794 LINK fdp 00:03:37.794 CC test/nvme/cuse/cuse.o 00:03:37.794 CXX test/cpp_headers/likely.o 00:03:37.794 CXX test/cpp_headers/log.o 00:03:37.794 CXX test/cpp_headers/lvol.o 00:03:37.794 CXX test/cpp_headers/memory.o 00:03:38.052 CXX test/cpp_headers/mmio.o 00:03:38.052 CXX test/cpp_headers/nbd.o 00:03:38.052 CXX test/cpp_headers/notify.o 00:03:38.052 CXX test/cpp_headers/nvme.o 00:03:38.052 CXX test/cpp_headers/nvme_intel.o 00:03:38.052 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.310 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.310 CXX test/cpp_headers/nvme_spec.o 00:03:38.310 CXX test/cpp_headers/nvme_zns.o 00:03:38.310 CC examples/nvmf/nvmf/nvmf.o 00:03:38.310 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.310 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.310 CXX test/cpp_headers/nvmf.o 00:03:38.568 CXX test/cpp_headers/nvmf_spec.o 00:03:38.568 CXX test/cpp_headers/nvmf_transport.o 00:03:38.568 CXX test/cpp_headers/opal.o 00:03:38.568 CXX test/cpp_headers/opal_spec.o 00:03:38.568 CXX test/cpp_headers/pci_ids.o 00:03:38.568 CXX test/cpp_headers/pipe.o 00:03:38.568 CXX test/cpp_headers/queue.o 00:03:38.826 CXX test/cpp_headers/reduce.o 00:03:38.826 CXX test/cpp_headers/rpc.o 00:03:38.826 CXX test/cpp_headers/scheduler.o 00:03:38.826 CXX test/cpp_headers/scsi.o 00:03:38.826 LINK nvmf 00:03:38.826 CXX test/cpp_headers/scsi_spec.o 00:03:38.826 CXX test/cpp_headers/sock.o 00:03:38.826 CXX test/cpp_headers/stdinc.o 00:03:38.826 CXX test/cpp_headers/string.o 00:03:38.826 CXX test/cpp_headers/thread.o 00:03:39.083 CXX test/cpp_headers/trace.o 00:03:39.083 CXX test/cpp_headers/trace_parser.o 00:03:39.083 CXX test/cpp_headers/tree.o 00:03:39.083 CXX test/cpp_headers/ublk.o 00:03:39.083 CXX test/cpp_headers/util.o 00:03:39.083 CXX test/cpp_headers/uuid.o 00:03:39.083 LINK cuse 00:03:39.083 CXX test/cpp_headers/version.o 00:03:39.083 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.341 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.341 CXX test/cpp_headers/vhost.o 00:03:39.341 CXX test/cpp_headers/vmd.o 00:03:39.341 LINK esnap 00:03:39.341 CXX test/cpp_headers/xor.o 00:03:39.341 CXX test/cpp_headers/zipf.o 00:03:39.907 00:03:39.907 real 1m33.713s 00:03:39.907 user 10m45.753s 00:03:39.907 sys 2m5.920s 00:03:39.907 ************************************ 00:03:39.907 END TEST make 00:03:39.907 ************************************ 00:03:39.907 14:20:19 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:39.907 14:20:19 make -- common/autotest_common.sh@10 -- $ set +x 00:03:39.907 14:20:19 -- common/autotest_common.sh@1142 -- $ return 0 00:03:39.907 14:20:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:39.907 14:20:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:39.907 14:20:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:39.907 14:20:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.907 14:20:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:39.907 14:20:19 -- pm/common@44 -- $ pid=5199 00:03:39.907 
14:20:19 -- pm/common@50 -- $ kill -TERM 5199 00:03:39.907 14:20:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:39.907 14:20:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:39.907 14:20:19 -- pm/common@44 -- $ pid=5201 00:03:39.907 14:20:19 -- pm/common@50 -- $ kill -TERM 5201 00:03:39.907 14:20:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:39.907 14:20:19 -- nvmf/common.sh@7 -- # uname -s 00:03:39.907 14:20:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.907 14:20:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.907 14:20:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.907 14:20:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.907 14:20:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.907 14:20:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.907 14:20:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.907 14:20:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.907 14:20:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.907 14:20:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.907 14:20:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:03:39.907 14:20:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:03:39.907 14:20:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.907 14:20:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.907 14:20:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:39.907 14:20:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.907 14:20:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:39.907 14:20:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.907 14:20:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.907 14:20:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.907 14:20:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.907 14:20:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.907 14:20:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.907 14:20:19 -- paths/export.sh@5 -- # export PATH 00:03:39.907 14:20:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.907 14:20:19 -- nvmf/common.sh@47 -- # : 0 00:03:39.907 
14:20:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:39.907 14:20:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:39.907 14:20:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.907 14:20:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.907 14:20:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.907 14:20:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:39.907 14:20:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:39.907 14:20:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:39.907 14:20:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:39.907 14:20:19 -- spdk/autotest.sh@32 -- # uname -s 00:03:39.907 14:20:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:39.907 14:20:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:39.907 14:20:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.907 14:20:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:39.907 14:20:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:39.907 14:20:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.165 14:20:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.165 14:20:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.165 14:20:19 -- spdk/autotest.sh@48 -- # udevadm_pid=54823 00:03:40.165 14:20:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.165 14:20:19 -- pm/common@17 -- # local monitor 00:03:40.165 14:20:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.165 14:20:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.165 14:20:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.165 14:20:19 -- pm/common@25 -- # sleep 1 00:03:40.165 14:20:19 -- pm/common@21 -- # date +%s 00:03:40.165 14:20:19 -- pm/common@21 -- # date +%s 00:03:40.165 14:20:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721053219 00:03:40.165 14:20:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721053219 00:03:40.165 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721053219_collect-cpu-load.pm.log 00:03:40.165 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721053219_collect-vmstat.pm.log 00:03:41.100 14:20:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.100 14:20:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.100 14:20:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:41.100 14:20:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.100 14:20:20 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.100 14:20:20 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:41.100 14:20:20 -- common/autotest_common.sh@10 -- # set +x 00:03:41.100 14:20:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.100 14:20:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.100 14:20:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.100 14:20:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 
00:03:41.100 14:20:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.100 14:20:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.100 14:20:20 -- common/autotest_common.sh@1455 -- # uname 00:03:41.100 14:20:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:41.100 14:20:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.100 14:20:20 -- common/autotest_common.sh@1475 -- # uname 00:03:41.100 14:20:20 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:41.100 14:20:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:41.100 14:20:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:41.100 14:20:20 -- spdk/autotest.sh@72 -- # hash lcov 00:03:41.100 14:20:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:41.100 14:20:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:41.100 --rc lcov_branch_coverage=1 00:03:41.100 --rc lcov_function_coverage=1 00:03:41.100 --rc genhtml_branch_coverage=1 00:03:41.100 --rc genhtml_function_coverage=1 00:03:41.100 --rc genhtml_legend=1 00:03:41.100 --rc geninfo_all_blocks=1 00:03:41.100 ' 00:03:41.100 14:20:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:41.100 --rc lcov_branch_coverage=1 00:03:41.100 --rc lcov_function_coverage=1 00:03:41.100 --rc genhtml_branch_coverage=1 00:03:41.100 --rc genhtml_function_coverage=1 00:03:41.100 --rc genhtml_legend=1 00:03:41.100 --rc geninfo_all_blocks=1 00:03:41.100 ' 00:03:41.100 14:20:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:41.100 --rc lcov_branch_coverage=1 00:03:41.100 --rc lcov_function_coverage=1 00:03:41.100 --rc genhtml_branch_coverage=1 00:03:41.100 --rc genhtml_function_coverage=1 00:03:41.100 --rc genhtml_legend=1 00:03:41.100 --rc geninfo_all_blocks=1 00:03:41.100 --no-external' 00:03:41.100 14:20:20 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:41.100 --rc lcov_branch_coverage=1 00:03:41.100 --rc lcov_function_coverage=1 00:03:41.100 --rc genhtml_branch_coverage=1 00:03:41.100 --rc genhtml_function_coverage=1 00:03:41.100 --rc genhtml_legend=1 00:03:41.100 --rc geninfo_all_blocks=1 00:03:41.100 --no-external' 00:03:41.100 14:20:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:41.100 lcov: LCOV version 1.14 00:03:41.100 14:20:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.272 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.272 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.522 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:11.522 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:11.522 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:11.522 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:11.522 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:11.522 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:11.522 [geninfo emitted the identical "no functions found" / "GCOV did not produce any data" warning pair for each remaining test/cpp_headers gcno file, barrier.gcno through vfio_user_spec.gcno, timestamps 00:04:11.522-00:04:12.039]
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:12.039 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:12.039 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:12.039 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:12.039 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:12.039 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:16.245 14:20:55 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:16.245 14:20:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.245 14:20:55 -- common/autotest_common.sh@10 -- # set +x 00:04:16.245 14:20:55 -- spdk/autotest.sh@91 -- # rm -f 00:04:16.245 14:20:55 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.245 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.502 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:16.502 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:16.502 14:20:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:16.502 14:20:55 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:16.502 14:20:55 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:16.502 14:20:55 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:16.502 14:20:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.502 14:20:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:16.502 14:20:55 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:16.502 14:20:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.502 14:20:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:16.502 14:20:55 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:16.502 14:20:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.502 14:20:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:16.502 14:20:55 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:16.502 14:20:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.502 14:20:55 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:16.502 14:20:55 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:16.502 14:20:55 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:16.502 14:20:55 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.502 14:20:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:16.502 14:20:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.502 14:20:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.502 14:20:55 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:04:16.502 14:20:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:16.503 14:20:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.503 No valid GPT data, bailing 00:04:16.503 14:20:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.503 14:20:55 -- scripts/common.sh@391 -- # pt= 00:04:16.503 14:20:55 -- scripts/common.sh@392 -- # return 1 00:04:16.503 14:20:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.503 1+0 records in 00:04:16.503 1+0 records out 00:04:16.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00310595 s, 338 MB/s 00:04:16.503 14:20:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.503 14:20:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.503 14:20:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:16.503 14:20:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:16.503 14:20:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:16.503 No valid GPT data, bailing 00:04:16.503 14:20:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.503 14:20:56 -- scripts/common.sh@391 -- # pt= 00:04:16.503 14:20:56 -- scripts/common.sh@392 -- # return 1 00:04:16.503 14:20:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:16.503 1+0 records in 00:04:16.503 1+0 records out 00:04:16.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382733 s, 274 MB/s 00:04:16.503 14:20:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.503 14:20:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.503 14:20:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:16.503 14:20:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:16.503 14:20:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:16.760 No valid GPT data, bailing 00:04:16.760 14:20:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:16.760 14:20:56 -- scripts/common.sh@391 -- # pt= 00:04:16.760 14:20:56 -- scripts/common.sh@392 -- # return 1 00:04:16.760 14:20:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:16.760 1+0 records in 00:04:16.760 1+0 records out 00:04:16.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423457 s, 248 MB/s 00:04:16.760 14:20:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.760 14:20:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.760 14:20:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:16.760 14:20:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:16.760 14:20:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:16.760 No valid GPT data, bailing 00:04:16.760 14:20:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:16.760 14:20:56 -- scripts/common.sh@391 -- # pt= 00:04:16.760 14:20:56 -- scripts/common.sh@392 -- # return 1 00:04:16.760 14:20:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:16.760 1+0 records in 00:04:16.760 1+0 records out 00:04:16.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00345991 s, 303 MB/s 00:04:16.760 14:20:56 -- spdk/autotest.sh@118 -- # sync 00:04:16.760 14:20:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.760 14:20:56 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.760 14:20:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.657 14:20:57 -- spdk/autotest.sh@124 -- # uname -s 00:04:18.657 14:20:57 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:18.657 14:20:57 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:18.657 14:20:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.657 14:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.657 14:20:57 -- common/autotest_common.sh@10 -- # set +x 00:04:18.657 ************************************ 00:04:18.657 START TEST setup.sh 00:04:18.657 ************************************ 00:04:18.657 14:20:57 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:18.657 * Looking for test storage... 00:04:18.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.657 14:20:58 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:18.657 14:20:58 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:18.657 14:20:58 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:18.657 14:20:58 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.657 14:20:58 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.657 14:20:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.657 ************************************ 00:04:18.657 START TEST acl 00:04:18.657 ************************************ 00:04:18.657 14:20:58 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:18.657 * Looking for test storage... 
00:04:18.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:18.657 14:20:58 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:18.657 14:20:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:18.657 14:20:58 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:18.657 14:20:58 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:18.657 14:20:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.657 14:20:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:18.658 14:20:58 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:18.658 14:20:58 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:18.658 14:20:58 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:18.658 14:20:58 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:18.658 14:20:58 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:18.658 14:20:58 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:18.658 14:20:58 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.658 14:20:58 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.222 14:20:58 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:19.223 14:20:58 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:19.223 14:20:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.223 14:20:58 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:19.223 14:20:58 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.223 14:20:58 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:19.787 14:20:59 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:19.787 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.787 14:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.787 Hugepages 00:04:19.787 node hugesize free / total 00:04:19.787 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:19.787 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:19.787 14:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.044 00:04:20.044 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:20.044 14:20:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:20.045 14:20:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:20.045 14:20:59 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.045 14:20:59 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.045 14:20:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:20.045 ************************************ 00:04:20.045 START TEST denied 00:04:20.045 ************************************ 00:04:20.045 14:20:59 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:20.045 14:20:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:20.045 14:20:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:20.045 14:20:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:20.045 14:20:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.045 14:20:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.977 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.977 14:21:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.542 00:04:21.542 real 0m1.416s 00:04:21.542 user 0m0.529s 00:04:21.542 sys 0m0.803s 00:04:21.542 14:21:01 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.542 14:21:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:21.542 ************************************ 00:04:21.542 END TEST denied 00:04:21.542 ************************************ 00:04:21.542 14:21:01 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:21.542 14:21:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:21.542 14:21:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.542 14:21:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.542 14:21:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:21.542 ************************************ 00:04:21.542 START TEST allowed 00:04:21.542 ************************************ 00:04:21.542 14:21:01 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:21.542 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:21.542 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:21.542 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:21.542 14:21:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.542 14:21:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:22.475 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.475 14:21:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.040 00:04:23.040 real 0m1.466s 00:04:23.040 user 0m0.631s 00:04:23.040 sys 0m0.834s 00:04:23.040 14:21:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:23.040 ************************************ 00:04:23.040 END TEST allowed 00:04:23.040 14:21:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:23.040 ************************************ 00:04:23.041 14:21:02 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:23.041 00:04:23.041 real 0m4.542s 00:04:23.041 user 0m1.909s 00:04:23.041 sys 0m2.555s 00:04:23.041 14:21:02 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.041 14:21:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:23.041 ************************************ 00:04:23.041 END TEST acl 00:04:23.041 ************************************ 00:04:23.041 14:21:02 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:23.041 14:21:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:23.041 14:21:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.041 14:21:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.041 14:21:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:23.041 ************************************ 00:04:23.041 START TEST hugepages 00:04:23.041 ************************************ 00:04:23.041 14:21:02 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:23.300 * Looking for test storage... 00:04:23.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5887592 kB' 'MemAvailable: 7395504 kB' 'Buffers: 2436 kB' 'Cached: 1719588 kB' 'SwapCached: 0 kB' 'Active: 476412 kB' 'Inactive: 1349352 kB' 'Active(anon): 114228 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349352 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 105592 kB' 'Mapped: 48740 kB' 'Shmem: 10488 kB' 'KReclaimable: 67076 kB' 'Slab: 140528 kB' 'SReclaimable: 67076 kB' 'SUnreclaim: 73452 kB' 'KernelStack: 6352 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.300 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.300 14:21:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.300 [the get_meminfo xtrace repeats the identical IFS=': ' / read -r var val _ / [[ <field> == Hugepagesize ]] / continue sequence for each remaining /proc/meminfo field, Active(anon) through VmallocChunk, timestamps 00:04:23.300-00:04:23.301] 00:04:23.301 14:21:02
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.301 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.302 14:21:02 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:23.302 14:21:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:23.302 14:21:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.302 14:21:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.302 14:21:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.302 ************************************ 00:04:23.302 START TEST default_setup 00:04:23.302 ************************************ 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.302 14:21:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.128 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.128 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:04:24.128 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:24.128 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:24.128 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.128 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7886804 kB' 'MemAvailable: 9394552 kB' 'Buffers: 2436 kB' 'Cached: 1719580 kB' 'SwapCached: 0 kB' 'Active: 493700 kB' 'Inactive: 1349352 kB' 'Active(anon): 131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349352 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 380 kB' 'Writeback: 0 kB' 'AnonPages: 122540 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140216 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73468 kB' 'KernelStack: 6304 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
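The entries above and below are setup/common.sh's get_meminfo scanning /proc/meminfo key by key for AnonHugePages; every non-matching key falls through to continue, and the scan ends with anon=0 a few entries further on. A minimal sketch of that lookup pattern, assuming a plain read of /proc/meminfo (the per-node meminfo branch and the mapfile step are left out, and the function name get_meminfo_sketch is illustrative, not the SPDK helper itself):

get_meminfo_sketch() {            # e.g. get_meminfo_sketch AnonHugePages
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys hit "continue", as in the trace
        echo "$val"                        # value in kB, or a bare page count for HugePages_*
        return 0
    done < /proc/meminfo
    return 1
}

Against the snapshot printed above, get_meminfo_sketch Hugepagesize would yield 2048 and get_meminfo_sketch AnonHugePages would yield 0, matching the echo 2048 and anon=0 results recorded in this run.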
00:04:24.129 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.130 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7887604 kB' 'MemAvailable: 9395356 kB' 'Buffers: 2436 kB' 'Cached: 1719580 kB' 'SwapCached: 0 kB' 'Active: 493700 kB' 'Inactive: 1349356 kB' 'Active(anon): 131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122512 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140212 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73464 kB' 'KernelStack: 6272 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
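This second pass over /proc/meminfo is verify_nr_hugepages checking HugePages_Surp, after the earlier steps cleared the per-node pools (clear_hp, CLEAR_HUGE=yes), requested 1024 default-size 2048 kB pages, and ran scripts/setup.sh. A rough sketch of that allocate-and-verify cycle using the same kernel knobs named in the trace (the per-node nr_hugepages files and /proc/sys/vm/nr_hugepages); this illustrates the sysfs/procfs interface only, not the SPDK scripts themselves, and it needs root:

want=1024

# clear_hp step: zero every per-node hugepage pool first
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done

# allocation step: the global knob hands out default-size (2048 kB) pages
echo "$want" > /proc/sys/vm/nr_hugepages

# verification step: HugePages_Total/Free should read back 1024, with
# HugePages_Rsvd and HugePages_Surp at 0, as in the snapshot above
grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo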
00:04:24.131 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.132 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7887892 kB' 'MemAvailable: 9395656 kB' 'Buffers: 2436 kB' 'Cached: 1719580 kB' 'SwapCached: 0 kB' 'Active: 493392 kB' 'Inactive: 1349368 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122312 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140120 kB' 'SReclaimable: 66748 kB' 
'SUnreclaim: 73372 kB' 'KernelStack: 6304 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.133 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.134 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:24.135 nr_hugepages=1024 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.135 resv_hugepages=0 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.135 surplus_hugepages=0 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.135 anon_hugepages=0 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7887892 kB' 'MemAvailable: 9395656 kB' 'Buffers: 2436 kB' 'Cached: 1719580 kB' 'SwapCached: 0 kB' 'Active: 493312 kB' 'Inactive: 1349368 kB' 'Active(anon): 131128 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 122240 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140120 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73372 kB' 'KernelStack: 6304 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.135 14:21:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.135 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.136 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.394 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7887892 kB' 'MemUsed: 4354080 kB' 'SwapCached: 0 kB' 'Active: 493288 kB' 'Inactive: 1349368 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349368 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1722016 kB' 'Mapped: 48736 kB' 'AnonPages: 122216 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66748 kB' 'Slab: 140120 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 
14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
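The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time with IFS=': ' and read -r var val _, hitting "continue" for every key until it reaches the requested HugePages_Surp field, then echoing its value (0 here) and returning. A minimal standalone sketch of that parsing pattern follows; it is a simplified stand-in, not the exact SPDK function, and the per-node branch is only inferred from the trace's /sys/devices/system/node/node/meminfo existence check.

# Sketch of the meminfo scan seen in the trace; simplified, illustrative names.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # If a node is given and per-node stats exist, read that file instead,
    # stripping the leading "Node N " prefix the way the traced mem=() expansion does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo key
        echo "$val"                        # kB for sizes, a bare count for HugePages_* keys
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
}
# e.g. get_meminfo_sketch HugePages_Surp prints 0 on this run, matching the echo 0 / return 0 above.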
00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.395 node0=1024 expecting 1024 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.395 00:04:24.395 real 0m1.017s 00:04:24.395 user 0m0.479s 00:04:24.395 sys 0m0.473s 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.395 14:21:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:24.396 ************************************ 00:04:24.396 END TEST default_setup 00:04:24.396 ************************************ 00:04:24.396 14:21:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:24.396 14:21:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:24.396 14:21:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.396 14:21:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.396 14:21:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.396 ************************************ 00:04:24.396 START TEST per_node_1G_alloc 00:04:24.396 ************************************ 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.396 14:21:03 
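The entries above close out TEST default_setup (node0=1024 expecting 1024, roughly one second wall-clock) and open TEST per_node_1G_alloc, where get_test_nr_hugepages 1048576 0 turns a 1 GiB request targeted at node 0 into a hugepage count. The arithmetic below is a back-of-the-envelope restatement of that sizing step; the variable names are illustrative and the kB units are inferred from the meminfo snapshots later in this log (Hugepagesize: 2048 kB, Hugetlb: 1048576 kB).

# Illustrative sizing, consistent with the traced nr_hugepages=512.
size_kb=1048576                             # 1 GiB requested for node 0
default_hugepage_kb=2048                    # 2 MiB hugepages on this VM
echo $(( size_kb / default_hugepage_kb ))   # -> 512
# The test then applies this as NRHUGE=512 HUGENODE=0, i.e. 512 pages pinned to node 0,
# which is why HugePages_Total: 512 appears in the meminfo dumps that follow.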
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.396 14:21:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.657 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.657 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8936756 kB' 'MemAvailable: 10444524 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493972 kB' 'Inactive: 1349372 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122708 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140080 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73332 kB' 'KernelStack: 6340 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.657 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8937616 kB' 'MemAvailable: 10445384 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493420 kB' 'Inactive: 1349372 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140076 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73328 kB' 'KernelStack: 6308 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.658 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.659 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8937364 kB' 'MemAvailable: 10445132 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493420 kB' 'Inactive: 1349372 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122384 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140076 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73328 kB' 'KernelStack: 6336 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.660 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.661 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 
14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.662 nr_hugepages=512 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:24.662 resv_hugepages=0 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.662 surplus_hugepages=0 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.662 anon_hugepages=0 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8937364 kB' 'MemAvailable: 10445132 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493384 kB' 'Inactive: 1349372 kB' 'Active(anon): 131200 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122360 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140076 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73328 kB' 'KernelStack: 6320 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.662 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.663 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:24.922 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.922 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.922 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.922 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.922 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 
14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.923 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8937112 kB' 'MemUsed: 3304860 kB' 'SwapCached: 0 kB' 'Active: 493168 kB' 'Inactive: 1349372 kB' 'Active(anon): 130984 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1722020 kB' 'Mapped: 48744 kB' 'AnonPages: 122364 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 66748 kB' 'Slab: 140068 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.924 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.925 node0=512 expecting 512 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:24.925 00:04:24.925 real 0m0.501s 00:04:24.925 user 0m0.249s 00:04:24.925 sys 0m0.283s 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.925 14:21:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.925 ************************************ 00:04:24.925 END TEST per_node_1G_alloc 00:04:24.925 ************************************ 00:04:24.925 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:24.925 14:21:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:24.925 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.925 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.925 14:21:04 
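The per_node_1G_alloc test above finishes by printing 'node0=512 expecting 512' and comparing the observed per-node hugepage count against the expected split. A minimal sketch of that kind of check, reading the 2048 kB hugepage counters straight from sysfs; the expected value of 512 is taken from this log for illustration, and the variable names are not the script's own:
  #!/usr/bin/env bash
  # Sketch: compare each NUMA node's reserved 2 MiB hugepages against an
  # expected per-node count, in the spirit of the "node0=512 expecting 512"
  # check printed above. Paths are standard sysfs; the logic is illustrative.
  expected_per_node=512
  status=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*/node}
      count_file=$node_dir/hugepages/hugepages-2048kB/nr_hugepages
      [[ -r $count_file ]] || continue
      actual=$(<"$count_file")
      echo "node${node}=${actual} expecting ${expected_per_node}"
      (( actual == expected_per_node )) || status=1
  done
  exit "$status"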
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.925 ************************************ 00:04:24.925 START TEST even_2G_alloc 00:04:24.925 ************************************ 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.925 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.926 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.185 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.185 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc 
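The even_2G_alloc test above requests 2097152 kB, derives nr_hugepages=1024 from the default 2048 kB hugepage size, and re-runs setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so the pages are spread evenly over the populated nodes. A rough sketch of that sizing arithmetic, assuming the requested size is given in kB; the variable names (want_kb, per_node) are illustrative rather than the script's own:
  #!/usr/bin/env bash
  # Sketch of the sizing step behind even_2G_alloc: 2097152 kB / 2048 kB per
  # hugepage = 1024 pages, divided evenly across the populated NUMA nodes.
  want_kb=2097152
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  nr_hugepages=$(( want_kb / hugepage_kb ))
  nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
  (( nodes > 0 )) || nodes=1
  per_node=$(( nr_hugepages / nodes ))
  echo "NRHUGE=${nr_hugepages} total, ${per_node} per node across ${nodes} node(s)"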
-- setup/hugepages.sh@92 -- # local surp 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899928 kB' 'MemAvailable: 9407696 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493352 kB' 'Inactive: 1349372 kB' 'Active(anon): 131168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 122504 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140196 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73448 kB' 'KernelStack: 6276 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
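Before counting anonymous huge pages, hugepages.sh@96 above tests the transparent-hugepage setting ('always [madvise] never' in this run) and only queries AnonHugePages when THP is not pinned to [never]. A small sketch of that guard, assuming the setting is read from the kernel's standard sysfs path; the awk lookup stands in for the script's get_meminfo helper:
  #!/usr/bin/env bash
  # Sketch of the transparent-hugepage guard: only count AnonHugePages toward
  # the verification when THP is not set to [never]. The sysfs path is the
  # kernel's standard location; the awk call stands in for get_meminfo.
  thp=/sys/kernel/mm/transparent_hugepage/enabled
  anon=0
  if [[ -r $thp && $(<"$thp") != *'[never]'* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages counted: ${anon} kB"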
setup/common.sh@32 -- # continue 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.185 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.186 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900208 kB' 'MemAvailable: 9407976 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493136 kB' 'Inactive: 
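Each get_meminfo call in the trace above walks the whole meminfo snapshot with IFS=': ' and read -r var val _, skipping every key that does not match the requested one and echoing the value when it does (here AnonHugePages resolves to 0, so anon=0). A compact sketch of that lookup pattern; the function name and the node handling are illustrative, not the setup/common.sh implementation:
  #!/usr/bin/env bash
  # Sketch: look up one key in /proc/meminfo, or in a node's meminfo file
  # when a node number is given. Mirrors the IFS=': ' / read -r var val _
  # scan visible in the trace; not the actual get_meminfo from common.sh.
  get_meminfo_value() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local line var val _
      while IFS= read -r line; do
          # Per-node files prefix each entry with "Node <id> "; strip it so
          # the same key names work for both file layouts.
          if [[ $line == Node\ * ]]; then
              line=${line#Node }
              line=${line#* }
          fi
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }
  get_meminfo_value HugePages_Free     # global lookup, e.g. 1024 on this box
  get_meminfo_value AnonHugePages 0    # per-node lookup, prints the kB value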
1349372 kB' 'Active(anon): 130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122344 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140200 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73452 kB' 'KernelStack: 6320 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.187 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.188 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.453 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900208 kB' 'MemAvailable: 9407976 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493124 kB' 'Inactive: 1349372 kB' 'Active(anon): 130940 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122052 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140200 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73452 kB' 'KernelStack: 6304 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.454 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.455 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.456 nr_hugepages=1024 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.456 resv_hugepages=0 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.456 surplus_hugepages=0 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.456 anon_hugepages=0 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900208 kB' 'MemAvailable: 9407976 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493084 kB' 'Inactive: 1349372 kB' 'Active(anon): 130900 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122272 kB' 'Mapped: 48744 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140200 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73452 kB' 'KernelStack: 6288 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.456 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.457 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.458 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.458 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900208 kB' 'MemUsed: 4341764 kB' 'SwapCached: 0 kB' 'Active: 493068 kB' 'Inactive: 1349372 kB' 'Active(anon): 130884 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1722020 kB' 'Mapped: 48744 kB' 'AnonPages: 122252 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66748 kB' 'Slab: 140200 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.459 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.460 node0=1024 expecting 1024 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.460 00:04:25.460 real 0m0.529s 00:04:25.460 user 0m0.279s 00:04:25.460 sys 0m0.284s 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.460 14:21:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.460 ************************************ 00:04:25.460 END TEST even_2G_alloc 00:04:25.460 ************************************ 00:04:25.460 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.460 14:21:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:25.460 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.460 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.460 14:21:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.460 ************************************ 00:04:25.460 START TEST odd_alloc 00:04:25.460 ************************************ 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:25.460 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
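For readers following the trace: the long blocks above are the xtrace of a shell helper that scans /proc/meminfo (or a node's sysfs meminfo) key by key until it reaches the requested HugePages_* counter, echoes its value, and then compares each node's count with the expected allocation ("node0=1024 expecting 1024"). The snippet below is a minimal, illustrative sketch of that pattern only, not the SPDK setup/common.sh or setup/hugepages.sh code; the function names get_hugepages_field and verify_node_hugepages and the "expected" argument are assumptions made for this example.

    #!/usr/bin/env bash
    # Illustrative sketch only -- NOT the SPDK setup scripts.
    # get_hugepages_field, verify_node_hugepages and "expected" are example names.

    get_hugepages_field() {
        local key=$1 node=${2-} mem_f=/proc/meminfo line var val _
        # With a node index, read that node's sysfs meminfo instead of the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            # Per-node sysfs lines carry a "Node N " prefix; strip it so keys compare cleanly.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$key" ]]; then
                echo "$val"            # e.g. 1024 for HugePages_Total
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    verify_node_hugepages() {
        # Compare each node's HugePages_Total with an expected per-node count,
        # in the spirit of the "node0=1024 expecting 1024" check printed above.
        local expected=$1 node_dir node total
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            [[ -e $node_dir/meminfo ]] || continue
            node=${node_dir##*node}
            total=$(get_hugepages_field HugePages_Total "$node") || return 1
            echo "node$node=$total expecting $expected"
            (( total == expected )) || return 1
        done
    }

    verify_node_hugepages 1024

On the single-NUMA-node VM in this log the whole 2 GiB pool (1024 pages of 2048 kB) lands on node0, which is exactly what the even_2G_alloc check confirms before the odd_alloc test re-runs setup with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes to request 1025 pages.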
00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.461 14:21:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.726 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.726 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:25.726 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:25.726 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900308 kB' 'MemAvailable: 9408076 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493572 kB' 'Inactive: 1349372 kB' 'Active(anon): 131388 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122500 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140268 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73520 kB' 'KernelStack: 6276 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.727 
14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.727 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.991 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 
14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
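Most of this trace is setup/common.sh's get_meminfo walking every line of a /proc/meminfo snapshot looking for a single key (AnonHugePages above, HugePages_Surp next). Below is a lookup reconstructed from the common.sh@17-@33 entries visible in the trace; it is a sketch, and the real helper may differ in detail.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <id> " prefix strip below

# Reconstruction of the get_meminfo flow traced at setup/common.sh@17-@33.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    # With a node argument, read the per-node meminfo file instead (the trace
    # shows the check against .../node/node/meminfo because $node is empty here).
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node <id> "

    # One pass over the dump: split "Key: value [kB]" and print the value of
    # the requested key. This loop is the long run of IFS=': ' / read / continue
    # entries that dominates the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

get_meminfo_sketch HugePages_Total   # prints 1025 while odd_alloc's pages are reserved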
00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900364 kB' 'MemAvailable: 9408132 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493216 kB' 'Inactive: 1349372 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140268 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73520 kB' 'KernelStack: 6288 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
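A note on the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p: they are not corruption. Under set -x, bash prints a quoted pattern operand of [[ ]] with every character backslash-escaped to show it is matched literally rather than as a glob. A minimal reproduction is below; the exact comparison in setup/common.sh may be written slightly differently.

#!/usr/bin/env bash
# Reproduce the escaped patterns seen throughout this trace.
set -x
get=HugePages_Surp
var=MemTotal
[[ $var == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x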
00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.992 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 
14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.993 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900112 kB' 'MemAvailable: 9407880 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493136 kB' 'Inactive: 1349372 kB' 'Active(anon): 130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140268 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73520 kB' 'KernelStack: 6304 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
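By this point the trace has read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is scanning for HugePages_Rsvd; the hugepages.sh@102-@107 entries further down show the totals being echoed and compared. Below is a condensed sketch of that verification flow, reusing the get_meminfo_sketch helper defined above; the names are reconstructions from the trace, not the exact setup/hugepages.sh code.

# Condensed sketch of the checks traced at hugepages.sh@97-@107.
verify_nr_hugepages_sketch() {
    local want=${1:-1025}   # nr_hugepages requested by odd_alloc in this run
    local anon surp resv total

    anon=$(get_meminfo_sketch AnonHugePages)    # hugepages.sh@97  -> 0 here
    surp=$(get_meminfo_sketch HugePages_Surp)   # hugepages.sh@99  -> 0 here
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # hugepages.sh@100 -> 0 here
    total=$(get_meminfo_sketch HugePages_Total)

    echo "nr_hugepages=$want"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # hugepages.sh@107: the pool must account for every requested, surplus and
    # reserved page -- 1025 == 1025 + 0 + 0 in this run.
    ((total == want + surp + resv))
}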
00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.994 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.995 nr_hugepages=1025 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:25.995 resv_hugepages=0 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.995 surplus_hugepages=0 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.995 anon_hugepages=0 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900632 kB' 'MemAvailable: 9408400 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493548 kB' 'Inactive: 1349372 kB' 'Active(anon): 131364 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 49000 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140264 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73516 kB' 'KernelStack: 6368 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.995 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.996 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900780 kB' 'MemUsed: 4341192 kB' 'SwapCached: 0 kB' 'Active: 493116 kB' 'Inactive: 1349372 kB' 'Active(anon): 130932 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1722020 kB' 'Mapped: 48740 kB' 'AnonPages: 122124 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66748 kB' 'Slab: 140264 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.997 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
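The repeated IFS=': ' / read -r / continue entries above and below are one pass over a meminfo file: the traced helper in setup/common.sh prints every "Key: value" line and skips entries until the requested key matches. A minimal sketch of that parsing pattern, reconstructed from the trace (the body below is an illustration, not the exact source):

shopt -s extglob                              # the +([0-9]) strip below needs extended globs
get_meminfo_sketch() {                        # sketch of the get_meminfo helper seen in the trace
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo mem line
    # A per-node file is used when it exists, e.g. /sys/devices/system/node/node0/meminfo.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node lines carry a "Node <n> " prefix; strip it
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

In this slice of the log the same pattern is exercised as get_meminfo HugePages_Rsvd (system-wide, answering 0) and as get_meminfo HugePages_Surp 0 (node 0 only), which is why the second scan reads /sys/devices/system/node/node0/meminfo.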
00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
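The echo 0 just above is node 0's HugePages_Surp; the odd_alloc bookkeeping that follows in the trace folds it into the per-node tally and expects the whole odd allocation of 1025 pages to land on node 0. A rough sketch of that accounting with the numbers the log reports (the initial per-node total is assumed here, since it was gathered earlier in the test and is not visible in this slice):

nr_hugepages=1025 resv=0 surp=0 total=1025    # values echoed by the trace
nodes_test=([0]=1025)                         # per-node HugePages_Total (assumed; collected earlier)
(( total == nr_hugepages + surp + resv ))     # 1025 == 1025 + 0 + 0, the hugepages.sh@110 check
(( nodes_test[0] += resv ))                   # add reserved pages back into the node tally
(( nodes_test[0] += 0 ))                      # node 0 HugePages_Surp, the 0 echoed above
echo "node0=${nodes_test[0]} expecting 1025"  # matches the 'node0=1025 expecting 1025' line below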
00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:25.998 node0=1025 expecting 1025 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:25.998 00:04:25.998 real 0m0.532s 00:04:25.998 user 0m0.249s 00:04:25.998 sys 0m0.295s 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.998 14:21:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.998 ************************************ 00:04:25.998 END TEST odd_alloc 00:04:25.998 ************************************ 00:04:25.998 14:21:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.998 14:21:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:25.998 14:21:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.998 14:21:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.998 14:21:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.998 ************************************ 00:04:25.998 START TEST custom_alloc 00:04:25.998 ************************************ 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.998 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:26.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.521 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.521 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.521 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8958424 kB' 'MemAvailable: 10466192 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493764 kB' 'Inactive: 1349372 kB' 'Active(anon): 131580 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122472 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140256 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73508 kB' 'KernelStack: 6356 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
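On the custom_alloc side, the trace above asks get_test_nr_hugepages for 1048576 kB and ends up with nr_hugepages=512 pinned to node 0 through HUGENODE='nodes_hp[0]=512' before scripts/setup.sh is re-run. The division itself is not printed, but the numbers line up with the 2048 kB Hugepagesize reported in the meminfo dump; a back-of-the-envelope sketch (variable names mirror the trace and are illustrative only):

size_kb=1048576                               # requested pool, from get_test_nr_hugepages 1048576
hugepagesize_kb=2048                          # 'Hugepagesize: 2048 kB' in the meminfo dump
nr_hugepages=$(( size_kb / hugepagesize_kb )) # 512 pages
HUGENODE="nodes_hp[0]=$nr_hugepages"          # all of them on node 0, as in the trace
echo "$HUGENODE nr_hugepages=$nr_hugepages"   # nodes_hp[0]=512 nr_hugepages=512

The meminfo dump above already reports HugePages_Total: 512, HugePages_Free: 512 and Hugetlb: 1048576 kB, i.e. the state that verify_nr_hugepages is scanning for in the surrounding entries.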
00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.522 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 8959072 kB' 'MemAvailable: 10466840 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493328 kB' 'Inactive: 1349372 kB' 'Active(anon): 131144 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122476 kB' 'Mapped: 48924 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140260 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73512 kB' 'KernelStack: 6260 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.523 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
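A side note on the heavily backslash-escaped right-hand sides in these comparisons (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and the like): this is almost certainly how bash xtrace renders a quoted expansion used as the right-hand side of == inside [[ ]], i.e. a display artifact of set -x rather than something written in the script, and it signals that the key is compared as a plain string instead of a glob. A two-line illustration of the difference, using a hypothetical key not taken from this run:

  key=HugePages_Surp
  [[ MemTotal == "$key" ]] && echo literal   # quoted: plain string compare, false here
  [[ MemTotal == Mem* ]]   && echo glob      # unquoted pattern: glob match, true here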
00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.524 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959328 kB' 'MemAvailable: 10467096 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493184 kB' 'Inactive: 1349372 kB' 'Active(anon): 131000 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122164 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140252 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73504 kB' 'KernelStack: 6320 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.525 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.526 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.527 nr_hugepages=512 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:26.527 resv_hugepages=0 
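At this point the three lookups above have produced anon=0, surp=0 and resv=0, and the test reports nr_hugepages=512; the lines that follow (hugepages.sh@107 and @109) then assert that the configured huge page count is fully accounted for before re-reading HugePages_Total. Condensed to its core, and assuming the literal 512 in those checks is the page count this test profile expects, the assertion is roughly:

  expected=512                               # page count the custom_alloc test profile asks for (assumed)
  nr_hugepages=512 anon=0 surp=0 resv=0      # values just reported by the trace
  (( expected == nr_hugepages + surp + resv ))   # no stray surplus/reserved pages
  (( expected == nr_hugepages ))                 # kernel allocated exactly what was requested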
00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.527 surplus_hugepages=0 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.527 anon_hugepages=0 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959328 kB' 'MemAvailable: 10467096 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493196 kB' 'Inactive: 1349372 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122176 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140248 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73500 kB' 'KernelStack: 6272 kB' 'PageTables: 3980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.527 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 
14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
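The checks traced here, (( 512 == nr_hugepages + surp + resv )) followed by get_nodes, are the core of verify_nr_hugepages: the HugePages_Total reported by the kernel must account for the requested pages plus any surplus and reserved pages, and each NUMA node is then inspected on its own. A rough equivalent, assuming 2048 kB hugepages and a standard Linux sysfs layout (variable names are illustrative, not hugepages.sh's):

    #!/usr/bin/env bash
    # Sketch of the accounting check and per-node walk from the trace above.
    target=512    # pages the test requested (taken from this run)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

    if (( total == target + surp + resv )); then
        echo "hugepage accounting OK: total=$total target=$target surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch: total=$total target=$target surp=$surp resv=$resv" >&2
    fi

    # Per-node view, roughly what get_nodes gathers: one counter per NUMA node.
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "${node##*/}: $(< "$node/hugepages/hugepages-2048kB/nr_hugepages") x 2 MiB pages"
    done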
-- # mem_f=/proc/meminfo 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.528 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8959764 kB' 'MemUsed: 3282208 kB' 'SwapCached: 0 kB' 'Active: 493108 kB' 'Inactive: 1349372 kB' 'Active(anon): 130924 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1722020 kB' 'Mapped: 48740 kB' 'AnonPages: 122332 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66748 kB' 'Slab: 140244 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 
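At this point the lookup switches to node scope: with node=0, setup/common.sh prefers /sys/devices/system/node/node0/meminfo over /proc/meminfo and strips the leading "Node 0 " prefix from every captured line (the "${mem[@]#Node +([0-9]) }" expansion) so field names match the plain meminfo format. A condensed sketch of that branch, with a hypothetical helper name and awk doing the final match:

    #!/usr/bin/env bash
    # Sketch: read a field from a per-node meminfo file when a node id is given,
    # falling back to /proc/meminfo otherwise. Requires extglob for the prefix strip.
    shopt -s extglob
    node_meminfo() {
        local node=$1 get=$2 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Free: 512" -> "HugePages_Free: 512"
        printf '%s\n' "${mem[@]}" | awk -v key="$get:" '$1 == key {print $2}'
    }

    node_meminfo 0 HugePages_Surp   # 0 in the run above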
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.529 14:21:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.529 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.530 node0=512 expecting 512 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:26.530 00:04:26.530 real 0m0.536s 00:04:26.530 user 0m0.282s 00:04:26.530 sys 0m0.287s 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.530 14:21:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.530 ************************************ 00:04:26.530 END TEST custom_alloc 
00:04:26.530 ************************************ 00:04:26.530 14:21:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:26.530 14:21:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:26.530 14:21:06 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.530 14:21:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.530 14:21:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.530 ************************************ 00:04:26.530 START TEST no_shrink_alloc 00:04:26.530 ************************************ 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.530 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.102 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.102 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:27.102 
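The no_shrink_alloc prologue above sizes the allocation: get_test_nr_hugepages is handed 2097152 kB and node 0, and with the 2048 kB default hugepage size reported on this VM that works out to the nr_hugepages=1024 assigned to node 0. The arithmetic, spelled out as a small illustrative script rather than the hugepages.sh implementation:

    #!/usr/bin/env bash
    # Sketch: convert a requested size in kB into a hugepage count and pin it
    # to one node, matching the numbers in the trace above.
    size_kb=2097152                                            # 2 GiB request
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
    nr_hugepages=$(( size_kb / hp_kb ))                        # 2097152 / 2048 = 1024

    declare -A nodes_test=([0]=$nr_hugepages)   # all pages assigned to node 0
    echo "nr_hugepages=$nr_hugepages"
    declare -p nodes_test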
14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7912356 kB' 'MemAvailable: 9420124 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493772 kB' 'Inactive: 1349372 kB' 'Active(anon): 131588 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122716 kB' 'Mapped: 48988 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140260 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73512 kB' 'KernelStack: 6364 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
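Before counting anonymous hugepages, the verification step above inspects the transparent_hugepage mode string ("always [madvise] never" on this host) and only queries AnonHugePages when "[never]" is not the selected mode; the scan that follows then resolves the value from the meminfo snapshot. A short sketch of that probe, assuming the standard sysfs path:

    #!/usr/bin/env bash
    # Sketch: skip the AnonHugePages lookup when THP is disabled system-wide.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB of THP-backed anon memory
    else
        anon=0   # THP disabled system-wide, nothing to count
    fi
    echo "anon_hugepages=${anon:-0} kB"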
00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 
14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.102 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 
14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.103 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7911856 kB' 'MemAvailable: 9419624 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493328 kB' 'Inactive: 1349372 kB' 'Active(anon): 131144 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140288 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73540 kB' 'KernelStack: 6320 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.104 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7911856 kB' 'MemAvailable: 9419624 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493348 kB' 'Inactive: 1349372 kB' 'Active(anon): 131164 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122272 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140288 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73540 kB' 'KernelStack: 6320 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.105 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.106 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.107 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.108 nr_hugepages=1024 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.108 resv_hugepages=0 00:04:27.108 surplus_hugepages=0 00:04:27.108 anon_hugepages=0 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
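For orientation, the long run of "[[ key == ... ]]" / "continue" entries above is the get_meminfo helper from setup/common.sh scanning every /proc/meminfo key until it reaches the one it was asked for: first AnonHugePages (anon=0), then HugePages_Surp (surp=0), then HugePages_Rsvd (resv=0), after which hugepages.sh checks the pool accounting and starts a HugePages_Total lookup. Below is a minimal sketch of that lookup and of the accounting around it, reconstructed from the xtrace itself rather than taken from the SPDK sources; only the function name, the meminfo keys, and the values shown come from this log, everything else is illustrative.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; not the verbatim setup/common.sh.
shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace

get_meminfo() {
    local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Surp
    local mem_f=/proc/meminfo
    # With a node argument, the per-NUMA-node meminfo is read instead (if present).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # strip any "Node N " prefix from per-node files
    local var val rest
    while IFS=': ' read -r var val rest; do
        # Every key that does not match shows up as one "continue" entry in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# The hugepages accounting exercised in this portion of the run:
anon=$(get_meminfo AnonHugePages)       # 0 (kB) in this run
surp=$(get_meminfo HugePages_Surp)      # 0 surplus hugepages
resv=$(get_meminfo HugePages_Rsvd)      # 0 reserved hugepages
nr_hugepages=1024                       # pool size configured for the test
(( 1024 == nr_hugepages + surp + resv ))    # hugepages.sh@107 in the trace
(( 1024 == nr_hugepages ))                  # hugepages.sh@109
total=$(get_meminfo HugePages_Total)        # the lookup the trace below continues with

In this run all three lookups return 0, so the 1024 preallocated hugepages are fully accounted for, and the trace below carries on with the HugePages_Total scan over the same meminfo snapshot.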
00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7911856 kB' 'MemAvailable: 9419624 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493360 kB' 'Inactive: 1349372 kB' 'Active(anon): 131176 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122548 kB' 'Mapped: 48640 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140288 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73540 kB' 'KernelStack: 6320 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.108 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.108 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.109 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920916 kB' 'MemUsed: 4321056 kB' 'SwapCached: 0 kB' 'Active: 493464 kB' 'Inactive: 1349372 kB' 'Active(anon): 131280 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1722020 kB' 'Mapped: 48740 kB' 'AnonPages: 122420 kB' 
'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66748 kB' 'Slab: 140288 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.110 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 
14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.111 node0=1024 expecting 1024 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.111 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.370 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.370 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:27.370 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:27.370 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920680 kB' 'MemAvailable: 9428448 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493756 kB' 'Inactive: 1349372 kB' 'Active(anon): 131572 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122680 kB' 'Mapped: 48856 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140280 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73532 kB' 'KernelStack: 6356 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
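(Annotation, not part of the captured log.) At this point the log shows scripts/setup.sh being re-run with NRHUGE=512 and CLEAR_HUGE=no, reporting that 1024 pages are already allocated on node0, and verify_nr_hugepages re-reading the counters. The arithmetic it repeats looks roughly like the sketch below; meminfo_value is a hypothetical stand-in for the get_meminfo helper sketched earlier, and the actual checks live in the repo's setup/hugepages.sh.

#!/usr/bin/env bash
# Illustrative re-creation of the no_shrink_alloc verification seen in the log:
# after the second setup.sh run (NRHUGE=512, CLEAR_HUGE=no) the test expects the
# original 1024 hugepages to be left untouched. meminfo_value is a hypothetical
# stand-in for the trace's get_meminfo helper.
meminfo_value() {
    local key=$1 node=${2:-}
    local f=/proc/meminfo
    [[ -n $node ]] && f=/sys/devices/system/node/node$node/meminfo
    awk -v k="$key:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$f"
}

expected=1024
total=$(meminfo_value HugePages_Total)
surp=$(meminfo_value HugePages_Surp)
resv=$(meminfo_value HugePages_Rsvd)

# Global view: allocated pages plus surplus and reserved must still match the target.
(( total == expected + surp + resv )) || echo "unexpected global hugepage count: $total"

# Per-node view: node0 should still hold all 1024 pages, mirroring the
# "node0=1024 expecting 1024" line in the log.
node0=$(meminfo_value HugePages_Total 0)
echo "node0=$node0 expecting $expected"
[[ $node0 == "$expected" ]]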
00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.370 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.635 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920680 kB' 'MemAvailable: 9428448 kB' 'Buffers: 2436 kB' 'Cached: 1719584 kB' 'SwapCached: 0 kB' 'Active: 493288 kB' 'Inactive: 1349372 kB' 'Active(anon): 131104 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349372 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140284 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73536 kB' 'KernelStack: 6260 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.636 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
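The repeated IFS=': ' / read -r var val _ / continue records above are the xtrace of setup/common.sh's get_meminfo helper scanning a meminfo snapshot for a single key (here HugePages_Surp). A minimal bash sketch of that lookup, reconstructed only from the commands visible in the trace (the name get_meminfo_sketch and the direct file read are simplifications for illustration, not the upstream source):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With an empty node the per-node path ".../node/meminfo" tested at common.sh@23
    # does not exist, so the helper falls back to /proc/meminfo.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # The real helper snapshots the data with "mapfile -t mem" and strips any leading
    # "Node N " prefix (common.sh@28-@29); streaming the file directly is a simplification.
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching key is one "continue" record above
        echo "${val:-0}"                   # common.sh@33: print the value, 0 for HugePages_Surp here
        return 0
    done < "$mem_f"
    echo 0
}

The caller captures the result, which is why this pass ends with echo 0 / return 0 and the trace then records surp=0 at hugepages.sh@99.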
00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 
14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.637 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7920428 kB' 'MemAvailable: 9428200 kB' 'Buffers: 2436 kB' 'Cached: 1719588 kB' 'SwapCached: 0 kB' 'Active: 493416 kB' 'Inactive: 1349376 kB' 'Active(anon): 131232 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122400 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140284 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73536 kB' 'KernelStack: 6352 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.638 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.639 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.640 nr_hugepages=1024 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:27.640 resv_hugepages=0 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.640 surplus_hugepages=0 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.640 anon_hugepages=0 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7923584 kB' 'MemAvailable: 9431356 kB' 'Buffers: 2436 kB' 'Cached: 1719588 kB' 'SwapCached: 0 kB' 'Active: 489364 kB' 'Inactive: 1349376 kB' 'Active(anon): 127180 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 48220 kB' 'Shmem: 10464 kB' 'KReclaimable: 66748 kB' 'Slab: 140272 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 73524 kB' 'KernelStack: 6272 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 6119424 kB' 'DirectMap1G: 8388608 kB' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.640 14:21:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.640 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
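The pass now underway re-reads HugePages_Total (hugepages.sh@110); together with the values already collected it feeds the shrink check traced at hugepages.sh@97-@109 above. Condensed into a sketch that reuses the hypothetical get_meminfo_sketch helper from the earlier note (roles inferred from the echoed values, not the upstream hugepages.sh):

anon=$(get_meminfo_sketch AnonHugePages)    # anon=0 at hugepages.sh@97
surp=$(get_meminfo_sketch HugePages_Surp)   # surp=0 at hugepages.sh@99
resv=$(get_meminfo_sketch HugePages_Rsvd)   # resv=0 at hugepages.sh@100
nr_hugepages=1024                           # matches "echo nr_hugepages=1024" in the trace
echo "nr_hugepages=$nr_hugepages" "resv_hugepages=$resv" \
     "surplus_hugepages=$surp" "anon_hugepages=$anon"
(( 1024 == nr_hugepages + surp + resv ))    # hugepages.sh@107: expected total matches the configured pool
(( 1024 == nr_hugepages ))                  # hugepages.sh@109, before HugePages_Total is re-read at @110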
00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.641 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7923800 kB' 'MemUsed: 4318172 kB' 'SwapCached: 0 kB' 'Active: 
489004 kB' 'Inactive: 1349376 kB' 'Active(anon): 126820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1349376 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1722024 kB' 'Mapped: 48000 kB' 'AnonPages: 117924 kB' 'Shmem: 10464 kB' 'KernelStack: 6176 kB' 'PageTables: 3572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66732 kB' 'Slab: 140216 kB' 'SReclaimable: 66732 kB' 'SUnreclaim: 73484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 
14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.642 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.643 node0=1024 expecting 1024 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:27.643 00:04:27.643 real 0m1.015s 00:04:27.643 user 0m0.510s 00:04:27.643 sys 0m0.557s 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.643 14:21:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.643 ************************************ 00:04:27.643 END TEST no_shrink_alloc 00:04:27.643 ************************************ 00:04:27.643 14:21:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:27.643 
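(Editor's note on the trace above: the long run of "continue" lines is the harness's get_meminfo loop scanning a meminfo file key by key until it reaches HugePages_Total, which reports 1024, and then repeating the same scan against node0's meminfo for HugePages_Surp. A hedged, much shorter equivalent of that lookup, using awk instead of the script's read loop; the node number and key name are taken from the trace, everything else is illustrative:)

    # Read one HugePages_* counter from a node's meminfo. Per-node meminfo
    # lines are prefixed with "Node <id>", e.g. "Node 0 HugePages_Surp: 0",
    # so the key is field 3 and the value is field 4.
    node=0
    key=HugePages_Surp
    awk -v k="${key}:" '$3 == k { print $4 }' \
        "/sys/devices/system/node/node${node}/meminfo"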
14:21:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:27.643 14:21:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:27.643 00:04:27.643 real 0m4.543s 00:04:27.643 user 0m2.204s 00:04:27.643 sys 0m2.422s 00:04:27.643 14:21:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.643 14:21:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.643 ************************************ 00:04:27.643 END TEST hugepages 00:04:27.643 ************************************ 00:04:27.643 14:21:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:27.643 14:21:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:27.643 14:21:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.643 14:21:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.643 14:21:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:27.643 ************************************ 00:04:27.644 START TEST driver 00:04:27.644 ************************************ 00:04:27.644 14:21:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:27.902 * Looking for test storage... 00:04:27.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:27.902 14:21:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:27.902 14:21:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.902 14:21:07 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.468 14:21:07 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:28.468 14:21:07 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.468 14:21:07 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.468 14:21:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.468 ************************************ 00:04:28.468 START TEST guess_driver 00:04:28.468 ************************************ 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:28.468 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:28.468 Looking for driver=uio_pci_generic 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.468 14:21:07 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.035 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:29.035 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:29.035 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.035 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.035 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:29.035 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.293 14:21:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.857 00:04:29.857 real 0m1.414s 00:04:29.857 user 0m0.571s 00:04:29.857 sys 0m0.836s 00:04:29.857 14:21:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
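(Editor's note: the guess_driver trace above settles on uio_pci_generic because the VM exposes zero IOMMU groups, so vfio is rejected and modprobe is asked whether uio_pci_generic can be resolved. A hedged sketch of that decision; it simplifies the order of the checks in driver.sh, and the variable names are my own:)

    # Prefer vfio-pci when the kernel exposes IOMMU groups; otherwise fall
    # back to uio_pci_generic if modprobe can resolve the module.
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        driver=vfio-pci
    elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        driver=uio_pci_generic
    else
        echo 'No valid driver found' >&2
        exit 1
    fi
    echo "Looking for driver=$driver"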
xtrace_disable 00:04:29.857 14:21:09 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.857 ************************************ 00:04:29.857 END TEST guess_driver 00:04:29.857 ************************************ 00:04:29.857 14:21:09 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:29.857 00:04:29.857 real 0m2.079s 00:04:29.857 user 0m0.801s 00:04:29.857 sys 0m1.327s 00:04:29.857 14:21:09 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.857 14:21:09 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.857 ************************************ 00:04:29.857 END TEST driver 00:04:29.857 ************************************ 00:04:29.857 14:21:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:29.857 14:21:09 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:29.857 14:21:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.857 14:21:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.857 14:21:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.857 ************************************ 00:04:29.857 START TEST devices 00:04:29.857 ************************************ 00:04:29.857 14:21:09 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:29.857 * Looking for test storage... 00:04:29.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:29.857 14:21:09 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:29.857 14:21:09 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:29.857 14:21:09 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.857 14:21:09 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
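(Editor's note: the get_zoned_devs pass that starts here walks /sys/block/nvme* and records any device whose queue/zoned attribute reports something other than "none"; in this run every namespace comes back non-zoned. A minimal sketch of that filter, with the loop and array name being my own, the sysfs attribute being standard:)

    declare -A zoned_devs=()
    for dev in /sys/block/nvme*; do
        [[ -e $dev/queue/zoned ]] || continue
        if [[ $(< "$dev/queue/zoned") != none ]]; then
            zoned_devs[${dev##*/}]=1   # e.g. zoned_devs[nvme0n1]=1
        fi
    done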
00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:30.792 14:21:10 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:30.792 No valid GPT data, bailing 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:30.792 14:21:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:30.792 14:21:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:30.792 
14:21:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:30.792 No valid GPT data, bailing 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:30.792 14:21:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:30.792 14:21:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:30.792 14:21:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:30.792 14:21:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:30.792 14:21:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:30.793 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:30.793 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:30.793 14:21:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:30.793 14:21:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:30.793 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:30.793 14:21:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:30.793 14:21:10 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:30.793 No valid GPT data, bailing 00:04:30.793 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:30.793 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:30.793 14:21:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:31.052 14:21:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:31.052 14:21:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:31.052 14:21:10 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:31.052 14:21:10 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:31.052 14:21:10 
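(Editor's note: each candidate namespace above is vetted the same way: spdk-gpt.py finds no existing GPT, hence "No valid GPT data, bailing", blkid confirms there is no partition table, and the device must be at least min_disk_size, 3221225472 bytes here, to be usable. A hedged per-device sketch using blkid and the sysfs size attribute; spdk-gpt.py itself lives in the SPDK repo and is not reproduced:)

    min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472, as in the trace
    block=nvme0n1
    pt=$(blkid -s PTTYPE -o value "/dev/$block")     # empty when no partition table
    size=$(( $(< "/sys/block/$block/size") * 512 ))  # size file counts 512-byte sectors
    if [[ -z $pt ]] && (( size >= min_disk_size )); then
        echo "/dev/$block is unpartitioned and large enough ($size bytes)"
    fi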
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:31.052 No valid GPT data, bailing 00:04:31.052 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:31.052 14:21:10 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:31.052 14:21:10 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:31.052 14:21:10 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:31.052 14:21:10 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:31.052 14:21:10 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:31.052 14:21:10 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:31.052 14:21:10 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.052 14:21:10 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.052 14:21:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:31.052 ************************************ 00:04:31.052 START TEST nvme_mount 00:04:31.052 ************************************ 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:31.052 14:21:10 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:32.052 Creating new GPT entries in memory. 00:04:32.052 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.052 other utilities. 00:04:32.052 14:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.052 14:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.052 14:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.052 14:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.052 14:21:11 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:32.988 Creating new GPT entries in memory. 00:04:32.988 The operation has completed successfully. 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59035 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:32.988 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.989 14:21:12 setup.sh.devices.nvme_mount -- 
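(Editor's note: the nvme_mount setup traced above zaps the disk, creates a single partition spanning sectors 2048 to 264191, formats it with ext4 and mounts it under the repo's test/setup/nvme_mount directory. A condensed, hedged sketch of those steps; it uses the same tools as the trace but omits the harness's flock and udev-sync wrappers:)

    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all              # wipe any existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191    # one partition, same bounds as the trace
    mkfs.ext4 -qF "${disk}p1"             # quiet, force
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"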
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.247 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.506 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.506 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.506 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:33.506 14:21:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.506 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.506 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.762 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.762 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.762 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.762 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
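(Editor's note: cleanup_nvme, traced just above, tears the fixture back down: unmount the test directory if it is still a mount point, then wipefs the partition and the whole disk so the next pass starts from a blank device. A minimal sketch using the same commands that appear in the trace:)

    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
    if mountpoint -q "$mnt"; then
        umount "$mnt"
    fi
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1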
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.762 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.763 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.763 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:33.763 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.763 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.763 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:33.763 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.019 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.278 14:21:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.278 14:21:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.536 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.536 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:34.536 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.536 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.536 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.536 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:34.794 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:34.794 00:04:34.794 real 0m3.884s 00:04:34.794 user 0m0.676s 00:04:34.794 sys 0m0.963s 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.794 14:21:14 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:34.794 ************************************ 00:04:34.794 END TEST nvme_mount 00:04:34.794 ************************************ 00:04:35.052 14:21:14 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:35.052 14:21:14 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:35.053 14:21:14 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.053 14:21:14 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.053 14:21:14 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.053 ************************************ 00:04:35.053 START TEST dm_mount 00:04:35.053 ************************************ 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.053 14:21:14 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:35.988 Creating new GPT entries in memory. 00:04:35.988 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:35.988 other utilities. 00:04:35.989 14:21:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:35.989 14:21:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.989 14:21:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:35.989 14:21:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.989 14:21:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:36.922 Creating new GPT entries in memory. 00:04:36.922 The operation has completed successfully. 00:04:36.922 14:21:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:36.922 14:21:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.922 14:21:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.922 14:21:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.922 14:21:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:38.295 The operation has completed successfully. 
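
The partitioning just traced, together with the device-mapper, format, and mount steps that follow below, can be reproduced by hand with the same tools. The sector ranges are copied from the trace (two 128 MiB partitions); the dmsetup table is not visible in the trace, so the linear concatenation used here is an assumption for illustration, not necessarily what the test feeds to dmsetup create.

# Wipe the GPT and create the two partitions exactly as the trace does
# (sectors 2048-264191 and 264192-526335, i.e. 2 x 128 MiB at 512-byte sectors).
sgdisk /dev/nvme0n1 --zap-all
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191
flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335

# Concatenate the two partitions into one dm device. ASSUMPTION: a plain linear
# table; the trace only shows "dmsetup create nvme_dm_test", not its table input.
printf '%s\n' '0 262144 linear /dev/nvme0n1p1 0' \
              '262144 262144 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test

# Format and mount the mapped device, mirroring setup/common.sh in the trace.
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount

# Teardown, mirroring cleanup_dm/cleanup_nvme further down in the trace.
umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
dmsetup remove --force nvme_dm_test
wipefs --all /dev/nvme0n1p1 /dev/nvme0n1p2
wipefs --all /dev/nvme0n1
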
00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59468 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.295 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.601 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.601 14:21:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.601 14:21:18 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:38.859 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:39.117 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:39.117 00:04:39.117 real 0m4.141s 00:04:39.117 user 0m0.446s 00:04:39.117 sys 0m0.663s 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.117 14:21:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:39.117 ************************************ 00:04:39.117 END TEST dm_mount 00:04:39.117 ************************************ 00:04:39.117 14:21:18 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:39.117 14:21:18 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:39.117 14:21:18 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:39.117 14:21:18 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.117 14:21:18 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.117 14:21:18 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.117 14:21:18 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.117 14:21:18 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.375 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:39.375 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:39.375 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.375 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.375 14:21:18 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:39.375 00:04:39.375 real 0m9.565s 00:04:39.375 user 0m1.766s 00:04:39.375 sys 0m2.230s 00:04:39.375 14:21:18 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.375 14:21:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.375 ************************************ 00:04:39.375 END TEST devices 00:04:39.375 ************************************ 00:04:39.375 14:21:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:39.375 00:04:39.375 real 0m20.998s 00:04:39.375 user 0m6.785s 00:04:39.375 sys 0m8.690s 00:04:39.375 14:21:18 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.375 14:21:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.375 ************************************ 00:04:39.375 END TEST setup.sh 00:04:39.375 ************************************ 00:04:39.375 14:21:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.375 14:21:18 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:40.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.311 Hugepages 00:04:40.311 node hugesize free / total 00:04:40.311 node0 1048576kB 0 / 0 00:04:40.311 node0 2048kB 2048 / 2048 00:04:40.311 00:04:40.311 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:40.311 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:40.311 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:40.311 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:40.311 14:21:19 -- spdk/autotest.sh@130 -- # uname -s 00:04:40.311 14:21:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:40.311 14:21:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:40.311 14:21:19 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.877 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.135 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.135 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:41.135 14:21:20 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:42.070 14:21:21 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:42.070 14:21:21 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:42.070 14:21:21 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:42.070 14:21:21 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:42.070 14:21:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:42.070 14:21:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:42.070 14:21:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:42.070 14:21:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:42.070 14:21:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:42.328 14:21:21 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:42.328 14:21:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:42.328 14:21:21 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.585 Waiting for block devices as requested 00:04:42.585 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.843 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.843 14:21:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:42.843 14:21:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:42.843 14:21:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:42.843 14:21:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:42.843 14:21:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:42.843 14:21:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1557 -- # continue 00:04:42.843 
14:21:22 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:42.843 14:21:22 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:42.843 14:21:22 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:42.843 14:21:22 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:42.843 14:21:22 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:42.843 14:21:22 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:42.843 14:21:22 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:42.843 14:21:22 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:42.843 14:21:22 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:42.843 14:21:22 -- common/autotest_common.sh@1557 -- # continue 00:04:42.843 14:21:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:42.843 14:21:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.843 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.843 14:21:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:42.843 14:21:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.843 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:04:42.843 14:21:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.666 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.666 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.666 14:21:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:43.666 14:21:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.666 14:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:43.666 14:21:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:43.667 14:21:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:43.667 14:21:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.667 14:21:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:43.667 14:21:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:43.667 14:21:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:43.667 14:21:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:43.667 14:21:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:43.667 14:21:23 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.667 14:21:23 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:43.667 14:21:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:43.667 14:21:23 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:43.667 14:21:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:43.667 14:21:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:43.667 14:21:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:43.667 14:21:23 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:43.667 14:21:23 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.667 14:21:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:43.667 14:21:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:43.667 14:21:23 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:43.667 14:21:23 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:43.667 14:21:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:43.925 14:21:23 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:43.925 14:21:23 -- common/autotest_common.sh@1593 -- # return 0 00:04:43.925 14:21:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:43.925 14:21:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:43.925 14:21:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.925 14:21:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.925 14:21:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:43.925 14:21:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.925 14:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:43.925 14:21:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:43.925 14:21:23 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.925 14:21:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.925 14:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.925 14:21:23 -- common/autotest_common.sh@10 -- # set +x 00:04:43.925 ************************************ 00:04:43.925 START TEST env 00:04:43.925 ************************************ 00:04:43.925 14:21:23 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:43.925 * Looking for test storage... 
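
The block-device and controller checks traced above reduce to a few nvme-cli and sysfs reads. The sketch below is a rough standalone equivalent, not the harness code itself: it assumes the oacs value is masked for the namespace-management bit (0x8), which is consistent with the 0x12a -> 8 reduction seen in the trace.

# Enumerate NVMe PCI addresses the same way the harness does (gen_nvme.sh emits
# a JSON bdev config; jq pulls out the traddr fields: 0000:00:10.0, 0000:00:11.0).
bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))

for bdf in "${bdfs[@]}"; do
    # Map each PCI address back to its kernel controller node (nvme0, nvme1, ...).
    ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")

    # OACS bit 3 (0x8) advertises Namespace Management; skip controllers without it.
    oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) || continue

    # An unallocated NVM capacity (unvmcap) of 0 means there is nothing to reclaim.
    unvmcap=$(nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && continue

    echo "$ctrl ($bdf) still has unallocated capacity: $unvmcap"
done
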
00:04:43.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:43.925 14:21:23 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.925 14:21:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.925 14:21:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.925 14:21:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.925 ************************************ 00:04:43.925 START TEST env_memory 00:04:43.925 ************************************ 00:04:43.925 14:21:23 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:43.925 00:04:43.925 00:04:43.925 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.925 http://cunit.sourceforge.net/ 00:04:43.925 00:04:43.925 00:04:43.925 Suite: memory 00:04:43.925 Test: alloc and free memory map ...[2024-07-15 14:21:23.407756] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.925 passed 00:04:43.925 Test: mem map translation ...[2024-07-15 14:21:23.433014] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.925 [2024-07-15 14:21:23.433083] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.925 [2024-07-15 14:21:23.433130] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.925 [2024-07-15 14:21:23.433138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.925 passed 00:04:43.925 Test: mem map registration ...[2024-07-15 14:21:23.485390] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.925 [2024-07-15 14:21:23.485454] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.925 passed 00:04:44.184 Test: mem map adjacent registrations ...passed 00:04:44.184 00:04:44.184 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.184 suites 1 1 n/a 0 0 00:04:44.184 tests 4 4 4 0 0 00:04:44.184 asserts 152 152 152 0 n/a 00:04:44.184 00:04:44.184 Elapsed time = 0.176 seconds 00:04:44.184 00:04:44.184 real 0m0.190s 00:04:44.184 user 0m0.173s 00:04:44.184 sys 0m0.015s 00:04:44.184 14:21:23 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.184 14:21:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 ************************************ 00:04:44.184 END TEST env_memory 00:04:44.184 ************************************ 00:04:44.184 14:21:23 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.184 14:21:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:44.184 14:21:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.184 14:21:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.184 14:21:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.184 ************************************ 00:04:44.184 START TEST env_vtophys 
00:04:44.184 ************************************ 00:04:44.184 14:21:23 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:44.184 EAL: lib.eal log level changed from notice to debug 00:04:44.184 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.184 EAL: Detected lcore 1 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 2 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 3 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 4 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 5 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 6 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 7 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 8 as core 0 on socket 0 00:04:44.185 EAL: Detected lcore 9 as core 0 on socket 0 00:04:44.185 EAL: Maximum logical cores by configuration: 128 00:04:44.185 EAL: Detected CPU lcores: 10 00:04:44.185 EAL: Detected NUMA nodes: 1 00:04:44.185 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:44.185 EAL: Detected shared linkage of DPDK 00:04:44.185 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.185 EAL: Selected IOVA mode 'PA' 00:04:44.185 EAL: Probing VFIO support... 00:04:44.185 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.185 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:44.185 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.185 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.185 EAL: Setting up physically contiguous memory... 00:04:44.185 EAL: Setting maximum number of open files to 524288 00:04:44.185 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.185 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.185 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.185 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.185 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.185 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.185 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.185 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.185 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.185 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.185 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.185 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.185 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.185 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.185 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.185 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.185 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.185 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.185 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.185 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.185 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.185 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.185 EAL: Hugepages will be freed exactly as allocated. 
00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: TSC frequency is ~2200000 KHz 00:04:44.185 EAL: Main lcore 0 is ready (tid=7efeaf45ea00;cpuset=[0]) 00:04:44.185 EAL: Trying to obtain current memory policy. 00:04:44.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.185 EAL: Restoring previous memory policy: 0 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.185 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:44.185 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:44.185 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.185 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:44.185 00:04:44.185 00:04:44.185 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.185 http://cunit.sourceforge.net/ 00:04:44.185 00:04:44.185 00:04:44.185 Suite: components_suite 00:04:44.185 Test: vtophys_malloc_test ...passed 00:04:44.185 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.185 EAL: Restoring previous memory policy: 4 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.185 EAL: Trying to obtain current memory policy. 00:04:44.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.185 EAL: Restoring previous memory policy: 4 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.185 EAL: Trying to obtain current memory policy. 00:04:44.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.185 EAL: Restoring previous memory policy: 4 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.185 EAL: Trying to obtain current memory policy. 
00:04:44.185 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.185 EAL: Restoring previous memory policy: 4 00:04:44.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.185 EAL: request: mp_malloc_sync 00:04:44.185 EAL: No shared files mode enabled, IPC is disabled 00:04:44.185 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.444 EAL: Trying to obtain current memory policy. 00:04:44.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.444 EAL: Restoring previous memory policy: 4 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.444 EAL: Trying to obtain current memory policy. 00:04:44.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.444 EAL: Restoring previous memory policy: 4 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.444 EAL: Trying to obtain current memory policy. 00:04:44.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.444 EAL: Restoring previous memory policy: 4 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.444 EAL: Trying to obtain current memory policy. 00:04:44.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.444 EAL: Restoring previous memory policy: 4 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.444 EAL: Trying to obtain current memory policy. 
00:04:44.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.444 EAL: Restoring previous memory policy: 4 00:04:44.444 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.444 EAL: request: mp_malloc_sync 00:04:44.444 EAL: No shared files mode enabled, IPC is disabled 00:04:44.444 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.703 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.703 EAL: request: mp_malloc_sync 00:04:44.703 EAL: No shared files mode enabled, IPC is disabled 00:04:44.703 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.703 EAL: Trying to obtain current memory policy. 00:04:44.703 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.703 EAL: Restoring previous memory policy: 4 00:04:44.703 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.703 EAL: request: mp_malloc_sync 00:04:44.703 EAL: No shared files mode enabled, IPC is disabled 00:04:44.703 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.961 EAL: request: mp_malloc_sync 00:04:44.961 EAL: No shared files mode enabled, IPC is disabled 00:04:44.961 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.961 passed 00:04:44.961 00:04:44.961 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.961 suites 1 1 n/a 0 0 00:04:44.961 tests 2 2 2 0 0 00:04:44.961 asserts 5169 5169 5169 0 n/a 00:04:44.961 00:04:44.961 Elapsed time = 0.696 seconds 00:04:44.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.961 EAL: request: mp_malloc_sync 00:04:44.961 EAL: No shared files mode enabled, IPC is disabled 00:04:44.961 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.961 EAL: No shared files mode enabled, IPC is disabled 00:04:44.961 EAL: No shared files mode enabled, IPC is disabled 00:04:44.961 EAL: No shared files mode enabled, IPC is disabled 00:04:44.961 00:04:44.961 real 0m0.890s 00:04:44.961 user 0m0.462s 00:04:44.961 sys 0m0.301s 00:04:44.961 14:21:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.961 14:21:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.961 ************************************ 00:04:44.961 END TEST env_vtophys 00:04:44.961 ************************************ 00:04:44.961 14:21:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:44.961 14:21:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.961 14:21:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.961 14:21:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.961 14:21:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.961 ************************************ 00:04:44.961 START TEST env_pci 00:04:44.961 ************************************ 00:04:44.961 14:21:24 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:45.218 00:04:45.218 00:04:45.218 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.218 http://cunit.sourceforge.net/ 00:04:45.218 00:04:45.218 00:04:45.218 Suite: pci 00:04:45.218 Test: pci_hook ...[2024-07-15 14:21:24.558358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60645 has claimed it 00:04:45.218 passed 00:04:45.218 00:04:45.218 EAL: Cannot find device (10000:00:01.0) 00:04:45.218 EAL: Failed to attach device on primary process 00:04:45.218 Run Summary: Type Total Ran Passed Failed 
Inactive 00:04:45.218 suites 1 1 n/a 0 0 00:04:45.218 tests 1 1 1 0 0 00:04:45.218 asserts 25 25 25 0 n/a 00:04:45.218 00:04:45.218 Elapsed time = 0.002 seconds 00:04:45.218 00:04:45.218 real 0m0.021s 00:04:45.218 user 0m0.009s 00:04:45.218 sys 0m0.012s 00:04:45.218 14:21:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.218 ************************************ 00:04:45.218 14:21:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:45.218 END TEST env_pci 00:04:45.218 ************************************ 00:04:45.218 14:21:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:45.218 14:21:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:45.218 14:21:24 env -- env/env.sh@15 -- # uname 00:04:45.218 14:21:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:45.218 14:21:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:45.218 14:21:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.218 14:21:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:45.218 14:21:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.218 14:21:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.218 ************************************ 00:04:45.219 START TEST env_dpdk_post_init 00:04:45.219 ************************************ 00:04:45.219 14:21:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:45.219 EAL: Detected CPU lcores: 10 00:04:45.219 EAL: Detected NUMA nodes: 1 00:04:45.219 EAL: Detected shared linkage of DPDK 00:04:45.219 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.219 EAL: Selected IOVA mode 'PA' 00:04:45.219 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.219 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:45.219 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:45.219 Starting DPDK initialization... 00:04:45.219 Starting SPDK post initialization... 00:04:45.219 SPDK NVMe probe 00:04:45.219 Attaching to 0000:00:10.0 00:04:45.219 Attaching to 0000:00:11.0 00:04:45.219 Attached to 0000:00:10.0 00:04:45.219 Attached to 0000:00:11.0 00:04:45.219 Cleaning up... 
00:04:45.219 00:04:45.219 real 0m0.180s 00:04:45.219 user 0m0.045s 00:04:45.219 sys 0m0.035s 00:04:45.219 14:21:24 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.219 14:21:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.219 ************************************ 00:04:45.219 END TEST env_dpdk_post_init 00:04:45.219 ************************************ 00:04:45.482 14:21:24 env -- common/autotest_common.sh@1142 -- # return 0 00:04:45.482 14:21:24 env -- env/env.sh@26 -- # uname 00:04:45.482 14:21:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:45.482 14:21:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.482 14:21:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.482 14:21:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.482 14:21:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.482 ************************************ 00:04:45.482 START TEST env_mem_callbacks 00:04:45.482 ************************************ 00:04:45.482 14:21:24 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:45.482 EAL: Detected CPU lcores: 10 00:04:45.482 EAL: Detected NUMA nodes: 1 00:04:45.482 EAL: Detected shared linkage of DPDK 00:04:45.482 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:45.482 EAL: Selected IOVA mode 'PA' 00:04:45.482 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:45.482 00:04:45.482 00:04:45.482 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.482 http://cunit.sourceforge.net/ 00:04:45.482 00:04:45.482 00:04:45.482 Suite: memory 00:04:45.482 Test: test ... 
00:04:45.482 register 0x200000200000 2097152 00:04:45.483 malloc 3145728 00:04:45.483 register 0x200000400000 4194304 00:04:45.483 buf 0x200000500000 len 3145728 PASSED 00:04:45.483 malloc 64 00:04:45.483 buf 0x2000004fff40 len 64 PASSED 00:04:45.483 malloc 4194304 00:04:45.483 register 0x200000800000 6291456 00:04:45.483 buf 0x200000a00000 len 4194304 PASSED 00:04:45.483 free 0x200000500000 3145728 00:04:45.483 free 0x2000004fff40 64 00:04:45.483 unregister 0x200000400000 4194304 PASSED 00:04:45.483 free 0x200000a00000 4194304 00:04:45.483 unregister 0x200000800000 6291456 PASSED 00:04:45.483 malloc 8388608 00:04:45.483 register 0x200000400000 10485760 00:04:45.483 buf 0x200000600000 len 8388608 PASSED 00:04:45.483 free 0x200000600000 8388608 00:04:45.483 unregister 0x200000400000 10485760 PASSED 00:04:45.483 passed 00:04:45.483 00:04:45.483 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.483 suites 1 1 n/a 0 0 00:04:45.483 tests 1 1 1 0 0 00:04:45.483 asserts 15 15 15 0 n/a 00:04:45.483 00:04:45.483 Elapsed time = 0.005 seconds 00:04:45.483 00:04:45.483 real 0m0.142s 00:04:45.483 user 0m0.017s 00:04:45.483 sys 0m0.025s 00:04:45.483 14:21:24 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.483 14:21:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:45.483 ************************************ 00:04:45.483 END TEST env_mem_callbacks 00:04:45.483 ************************************ 00:04:45.483 14:21:25 env -- common/autotest_common.sh@1142 -- # return 0 00:04:45.483 00:04:45.483 real 0m1.752s 00:04:45.483 user 0m0.828s 00:04:45.483 sys 0m0.581s 00:04:45.483 14:21:25 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.483 14:21:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.483 ************************************ 00:04:45.483 END TEST env 00:04:45.483 ************************************ 00:04:45.483 14:21:25 -- common/autotest_common.sh@1142 -- # return 0 00:04:45.483 14:21:25 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.483 14:21:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.483 14:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.483 14:21:25 -- common/autotest_common.sh@10 -- # set +x 00:04:45.751 ************************************ 00:04:45.751 START TEST rpc 00:04:45.751 ************************************ 00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:45.751 * Looking for test storage... 00:04:45.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.751 14:21:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60760 00:04:45.751 14:21:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:45.751 14:21:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.751 14:21:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60760 00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@829 -- # '[' -z 60760 ']' 00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
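From this point the rpc suite drives a freshly launched spdk_tgt (pid 60760) over /var/tmp/spdk.sock; the harness's rpc_cmd helper is, in effect, a JSON-RPC client bound to that socket. A rough stand-alone equivalent of the setup, assuming scripts/rpc.py as the client and the paths shown in the log:

    # launch the target with the bdev tracepoint group enabled, as the log does
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # wait for the default RPC socket to appear, then issue any call
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version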
00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.751 14:21:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.751 [2024-07-15 14:21:25.226916] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:45.751 [2024-07-15 14:21:25.227025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60760 ] 00:04:46.009 [2024-07-15 14:21:25.363063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.009 [2024-07-15 14:21:25.422726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:46.009 [2024-07-15 14:21:25.422785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60760' to capture a snapshot of events at runtime. 00:04:46.009 [2024-07-15 14:21:25.422797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:46.009 [2024-07-15 14:21:25.422806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:46.009 [2024-07-15 14:21:25.422813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60760 for offline analysis/debug. 00:04:46.009 [2024-07-15 14:21:25.422857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.009 14:21:25 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.009 14:21:25 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:46.009 14:21:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.009 14:21:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:46.009 14:21:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:46.009 14:21:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:46.009 14:21:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.009 14:21:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.009 14:21:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.009 ************************************ 00:04:46.009 START TEST rpc_integrity 00:04:46.009 ************************************ 00:04:46.009 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:46.009 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.009 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.009 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.267 14:21:25 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.267 { 00:04:46.267 "aliases": [ 00:04:46.267 "847d3e6c-b592-4f6c-bb83-bcdedb666fad" 00:04:46.267 ], 00:04:46.267 "assigned_rate_limits": { 00:04:46.267 "r_mbytes_per_sec": 0, 00:04:46.267 "rw_ios_per_sec": 0, 00:04:46.267 "rw_mbytes_per_sec": 0, 00:04:46.267 "w_mbytes_per_sec": 0 00:04:46.267 }, 00:04:46.267 "block_size": 512, 00:04:46.267 "claimed": false, 00:04:46.267 "driver_specific": {}, 00:04:46.267 "memory_domains": [ 00:04:46.267 { 00:04:46.267 "dma_device_id": "system", 00:04:46.267 "dma_device_type": 1 00:04:46.267 }, 00:04:46.267 { 00:04:46.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.267 "dma_device_type": 2 00:04:46.267 } 00:04:46.267 ], 00:04:46.267 "name": "Malloc0", 00:04:46.267 "num_blocks": 16384, 00:04:46.267 "product_name": "Malloc disk", 00:04:46.267 "supported_io_types": { 00:04:46.267 "abort": true, 00:04:46.267 "compare": false, 00:04:46.267 "compare_and_write": false, 00:04:46.267 "copy": true, 00:04:46.267 "flush": true, 00:04:46.267 "get_zone_info": false, 00:04:46.267 "nvme_admin": false, 00:04:46.267 "nvme_io": false, 00:04:46.267 "nvme_io_md": false, 00:04:46.267 "nvme_iov_md": false, 00:04:46.267 "read": true, 00:04:46.267 "reset": true, 00:04:46.267 "seek_data": false, 00:04:46.267 "seek_hole": false, 00:04:46.267 "unmap": true, 00:04:46.267 "write": true, 00:04:46.267 "write_zeroes": true, 00:04:46.267 "zcopy": true, 00:04:46.267 "zone_append": false, 00:04:46.267 "zone_management": false 00:04:46.267 }, 00:04:46.267 "uuid": "847d3e6c-b592-4f6c-bb83-bcdedb666fad", 00:04:46.267 "zoned": false 00:04:46.267 } 00:04:46.267 ]' 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.267 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.267 [2024-07-15 14:21:25.734550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:46.267 [2024-07-15 14:21:25.734606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.267 [2024-07-15 14:21:25.734626] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b72ad0 00:04:46.267 [2024-07-15 14:21:25.734636] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.267 [2024-07-15 14:21:25.736242] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.267 [2024-07-15 14:21:25.736279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.267 Passthru0 00:04:46.267 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.267 
14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.268 { 00:04:46.268 "aliases": [ 00:04:46.268 "847d3e6c-b592-4f6c-bb83-bcdedb666fad" 00:04:46.268 ], 00:04:46.268 "assigned_rate_limits": { 00:04:46.268 "r_mbytes_per_sec": 0, 00:04:46.268 "rw_ios_per_sec": 0, 00:04:46.268 "rw_mbytes_per_sec": 0, 00:04:46.268 "w_mbytes_per_sec": 0 00:04:46.268 }, 00:04:46.268 "block_size": 512, 00:04:46.268 "claim_type": "exclusive_write", 00:04:46.268 "claimed": true, 00:04:46.268 "driver_specific": {}, 00:04:46.268 "memory_domains": [ 00:04:46.268 { 00:04:46.268 "dma_device_id": "system", 00:04:46.268 "dma_device_type": 1 00:04:46.268 }, 00:04:46.268 { 00:04:46.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.268 "dma_device_type": 2 00:04:46.268 } 00:04:46.268 ], 00:04:46.268 "name": "Malloc0", 00:04:46.268 "num_blocks": 16384, 00:04:46.268 "product_name": "Malloc disk", 00:04:46.268 "supported_io_types": { 00:04:46.268 "abort": true, 00:04:46.268 "compare": false, 00:04:46.268 "compare_and_write": false, 00:04:46.268 "copy": true, 00:04:46.268 "flush": true, 00:04:46.268 "get_zone_info": false, 00:04:46.268 "nvme_admin": false, 00:04:46.268 "nvme_io": false, 00:04:46.268 "nvme_io_md": false, 00:04:46.268 "nvme_iov_md": false, 00:04:46.268 "read": true, 00:04:46.268 "reset": true, 00:04:46.268 "seek_data": false, 00:04:46.268 "seek_hole": false, 00:04:46.268 "unmap": true, 00:04:46.268 "write": true, 00:04:46.268 "write_zeroes": true, 00:04:46.268 "zcopy": true, 00:04:46.268 "zone_append": false, 00:04:46.268 "zone_management": false 00:04:46.268 }, 00:04:46.268 "uuid": "847d3e6c-b592-4f6c-bb83-bcdedb666fad", 00:04:46.268 "zoned": false 00:04:46.268 }, 00:04:46.268 { 00:04:46.268 "aliases": [ 00:04:46.268 "2dbccc06-706f-5a5e-932a-ca76302ba68b" 00:04:46.268 ], 00:04:46.268 "assigned_rate_limits": { 00:04:46.268 "r_mbytes_per_sec": 0, 00:04:46.268 "rw_ios_per_sec": 0, 00:04:46.268 "rw_mbytes_per_sec": 0, 00:04:46.268 "w_mbytes_per_sec": 0 00:04:46.268 }, 00:04:46.268 "block_size": 512, 00:04:46.268 "claimed": false, 00:04:46.268 "driver_specific": { 00:04:46.268 "passthru": { 00:04:46.268 "base_bdev_name": "Malloc0", 00:04:46.268 "name": "Passthru0" 00:04:46.268 } 00:04:46.268 }, 00:04:46.268 "memory_domains": [ 00:04:46.268 { 00:04:46.268 "dma_device_id": "system", 00:04:46.268 "dma_device_type": 1 00:04:46.268 }, 00:04:46.268 { 00:04:46.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.268 "dma_device_type": 2 00:04:46.268 } 00:04:46.268 ], 00:04:46.268 "name": "Passthru0", 00:04:46.268 "num_blocks": 16384, 00:04:46.268 "product_name": "passthru", 00:04:46.268 "supported_io_types": { 00:04:46.268 "abort": true, 00:04:46.268 "compare": false, 00:04:46.268 "compare_and_write": false, 00:04:46.268 "copy": true, 00:04:46.268 "flush": true, 00:04:46.268 "get_zone_info": false, 00:04:46.268 "nvme_admin": false, 00:04:46.268 "nvme_io": false, 00:04:46.268 "nvme_io_md": false, 00:04:46.268 "nvme_iov_md": false, 00:04:46.268 "read": true, 00:04:46.268 "reset": true, 00:04:46.268 "seek_data": false, 00:04:46.268 "seek_hole": false, 00:04:46.268 "unmap": true, 00:04:46.268 "write": true, 00:04:46.268 "write_zeroes": true, 00:04:46.268 
"zcopy": true, 00:04:46.268 "zone_append": false, 00:04:46.268 "zone_management": false 00:04:46.268 }, 00:04:46.268 "uuid": "2dbccc06-706f-5a5e-932a-ca76302ba68b", 00:04:46.268 "zoned": false 00:04:46.268 } 00:04:46.268 ]' 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.268 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.268 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.527 14:21:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.527 00:04:46.527 real 0m0.294s 00:04:46.527 user 0m0.204s 00:04:46.527 sys 0m0.025s 00:04:46.527 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.527 14:21:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 ************************************ 00:04:46.527 END TEST rpc_integrity 00:04:46.527 ************************************ 00:04:46.527 14:21:25 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.527 14:21:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.527 14:21:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.527 14:21:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.527 14:21:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 ************************************ 00:04:46.527 START TEST rpc_plugins 00:04:46.527 ************************************ 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:46.527 14:21:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.527 14:21:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.527 14:21:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 14:21:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.527 14:21:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 
00:04:46.527 { 00:04:46.527 "aliases": [ 00:04:46.527 "d7c1e741-cb3f-4baf-b513-8f547ed598be" 00:04:46.527 ], 00:04:46.527 "assigned_rate_limits": { 00:04:46.527 "r_mbytes_per_sec": 0, 00:04:46.527 "rw_ios_per_sec": 0, 00:04:46.527 "rw_mbytes_per_sec": 0, 00:04:46.527 "w_mbytes_per_sec": 0 00:04:46.527 }, 00:04:46.527 "block_size": 4096, 00:04:46.527 "claimed": false, 00:04:46.527 "driver_specific": {}, 00:04:46.527 "memory_domains": [ 00:04:46.527 { 00:04:46.527 "dma_device_id": "system", 00:04:46.527 "dma_device_type": 1 00:04:46.527 }, 00:04:46.527 { 00:04:46.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.527 "dma_device_type": 2 00:04:46.527 } 00:04:46.527 ], 00:04:46.527 "name": "Malloc1", 00:04:46.527 "num_blocks": 256, 00:04:46.527 "product_name": "Malloc disk", 00:04:46.527 "supported_io_types": { 00:04:46.527 "abort": true, 00:04:46.527 "compare": false, 00:04:46.527 "compare_and_write": false, 00:04:46.527 "copy": true, 00:04:46.527 "flush": true, 00:04:46.527 "get_zone_info": false, 00:04:46.527 "nvme_admin": false, 00:04:46.527 "nvme_io": false, 00:04:46.527 "nvme_io_md": false, 00:04:46.527 "nvme_iov_md": false, 00:04:46.527 "read": true, 00:04:46.527 "reset": true, 00:04:46.527 "seek_data": false, 00:04:46.527 "seek_hole": false, 00:04:46.527 "unmap": true, 00:04:46.527 "write": true, 00:04:46.527 "write_zeroes": true, 00:04:46.527 "zcopy": true, 00:04:46.527 "zone_append": false, 00:04:46.527 "zone_management": false 00:04:46.527 }, 00:04:46.527 "uuid": "d7c1e741-cb3f-4baf-b513-8f547ed598be", 00:04:46.527 "zoned": false 00:04:46.527 } 00:04:46.527 ]' 00:04:46.527 14:21:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.527 14:21:26 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.527 14:21:26 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.527 14:21:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.527 14:21:26 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.527 14:21:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:46.527 14:21:26 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.527 00:04:46.527 real 0m0.163s 00:04:46.527 user 0m0.101s 00:04:46.527 sys 0m0.024s 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.527 14:21:26 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.527 ************************************ 00:04:46.527 END TEST rpc_plugins 00:04:46.527 ************************************ 00:04:46.786 14:21:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:46.786 14:21:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.786 14:21:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.786 14:21:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.786 14:21:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.786 ************************************ 00:04:46.786 START TEST 
rpc_trace_cmd_test 00:04:46.786 ************************************ 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:46.786 "bdev": { 00:04:46.786 "mask": "0x8", 00:04:46.786 "tpoint_mask": "0xffffffffffffffff" 00:04:46.786 }, 00:04:46.786 "bdev_nvme": { 00:04:46.786 "mask": "0x4000", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "blobfs": { 00:04:46.786 "mask": "0x80", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "dsa": { 00:04:46.786 "mask": "0x200", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "ftl": { 00:04:46.786 "mask": "0x40", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "iaa": { 00:04:46.786 "mask": "0x1000", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "iscsi_conn": { 00:04:46.786 "mask": "0x2", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "nvme_pcie": { 00:04:46.786 "mask": "0x800", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "nvme_tcp": { 00:04:46.786 "mask": "0x2000", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "nvmf_rdma": { 00:04:46.786 "mask": "0x10", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "nvmf_tcp": { 00:04:46.786 "mask": "0x20", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "scsi": { 00:04:46.786 "mask": "0x4", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "sock": { 00:04:46.786 "mask": "0x8000", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "thread": { 00:04:46.786 "mask": "0x400", 00:04:46.786 "tpoint_mask": "0x0" 00:04:46.786 }, 00:04:46.786 "tpoint_group_mask": "0x8", 00:04:46.786 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60760" 00:04:46.786 }' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:46.786 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:47.044 14:21:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:47.044 00:04:47.044 real 0m0.269s 00:04:47.044 user 0m0.232s 00:04:47.044 sys 0m0.030s 00:04:47.044 14:21:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.044 14:21:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:47.044 ************************************ 00:04:47.044 END TEST 
rpc_trace_cmd_test 00:04:47.044 ************************************ 00:04:47.044 14:21:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:47.044 14:21:26 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:47.044 14:21:26 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:47.044 14:21:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.044 14:21:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.044 14:21:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.044 ************************************ 00:04:47.044 START TEST go_rpc 00:04:47.044 ************************************ 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["09f863bb-cf2f-4550-ae88-3b7ab752d211"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"09f863bb-cf2f-4550-ae88-3b7ab752d211","zoned":false}]' 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.045 14:21:26 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.045 14:21:26 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:47.304 14:21:26 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:47.304 14:21:26 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:47.304 14:21:26 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:47.304 00:04:47.304 real 0m0.241s 00:04:47.304 user 0m0.165s 00:04:47.304 sys 0m0.040s 00:04:47.304 14:21:26 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.304 ************************************ 00:04:47.304 END TEST go_rpc 00:04:47.304 14:21:26 rpc.go_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:47.304 ************************************ 00:04:47.304 14:21:26 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:47.304 14:21:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:47.304 14:21:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:47.304 14:21:26 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.304 14:21:26 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.304 14:21:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.304 ************************************ 00:04:47.304 START TEST rpc_daemon_integrity 00:04:47.304 ************************************ 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:47.304 { 00:04:47.304 "aliases": [ 00:04:47.304 "f1a6163b-019d-4412-bafe-22b823ecd02e" 00:04:47.304 ], 00:04:47.304 "assigned_rate_limits": { 00:04:47.304 "r_mbytes_per_sec": 0, 00:04:47.304 "rw_ios_per_sec": 0, 00:04:47.304 "rw_mbytes_per_sec": 0, 00:04:47.304 "w_mbytes_per_sec": 0 00:04:47.304 }, 00:04:47.304 "block_size": 512, 00:04:47.304 "claimed": false, 00:04:47.304 "driver_specific": {}, 00:04:47.304 "memory_domains": [ 00:04:47.304 { 00:04:47.304 "dma_device_id": "system", 00:04:47.304 "dma_device_type": 1 00:04:47.304 }, 00:04:47.304 { 00:04:47.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.304 "dma_device_type": 2 00:04:47.304 } 00:04:47.304 ], 00:04:47.304 "name": "Malloc3", 00:04:47.304 "num_blocks": 16384, 00:04:47.304 "product_name": "Malloc disk", 00:04:47.304 "supported_io_types": { 00:04:47.304 "abort": true, 00:04:47.304 "compare": false, 00:04:47.304 "compare_and_write": false, 00:04:47.304 "copy": true, 00:04:47.304 "flush": true, 00:04:47.304 "get_zone_info": false, 00:04:47.304 "nvme_admin": false, 00:04:47.304 "nvme_io": false, 00:04:47.304 "nvme_io_md": false, 00:04:47.304 "nvme_iov_md": false, 00:04:47.304 "read": true, 00:04:47.304 "reset": true, 00:04:47.304 "seek_data": 
false, 00:04:47.304 "seek_hole": false, 00:04:47.304 "unmap": true, 00:04:47.304 "write": true, 00:04:47.304 "write_zeroes": true, 00:04:47.304 "zcopy": true, 00:04:47.304 "zone_append": false, 00:04:47.304 "zone_management": false 00:04:47.304 }, 00:04:47.304 "uuid": "f1a6163b-019d-4412-bafe-22b823ecd02e", 00:04:47.304 "zoned": false 00:04:47.304 } 00:04:47.304 ]' 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:47.304 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.305 [2024-07-15 14:21:26.879033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:47.305 [2024-07-15 14:21:26.879118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:47.305 [2024-07-15 14:21:26.879150] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d69d70 00:04:47.305 [2024-07-15 14:21:26.879170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:47.305 [2024-07-15 14:21:26.881124] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:47.305 [2024-07-15 14:21:26.881184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:47.305 Passthru0 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.305 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.565 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.565 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:47.565 { 00:04:47.565 "aliases": [ 00:04:47.565 "f1a6163b-019d-4412-bafe-22b823ecd02e" 00:04:47.565 ], 00:04:47.565 "assigned_rate_limits": { 00:04:47.565 "r_mbytes_per_sec": 0, 00:04:47.565 "rw_ios_per_sec": 0, 00:04:47.565 "rw_mbytes_per_sec": 0, 00:04:47.565 "w_mbytes_per_sec": 0 00:04:47.565 }, 00:04:47.565 "block_size": 512, 00:04:47.565 "claim_type": "exclusive_write", 00:04:47.565 "claimed": true, 00:04:47.565 "driver_specific": {}, 00:04:47.565 "memory_domains": [ 00:04:47.565 { 00:04:47.565 "dma_device_id": "system", 00:04:47.565 "dma_device_type": 1 00:04:47.565 }, 00:04:47.565 { 00:04:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.565 "dma_device_type": 2 00:04:47.565 } 00:04:47.565 ], 00:04:47.565 "name": "Malloc3", 00:04:47.565 "num_blocks": 16384, 00:04:47.565 "product_name": "Malloc disk", 00:04:47.565 "supported_io_types": { 00:04:47.565 "abort": true, 00:04:47.565 "compare": false, 00:04:47.565 "compare_and_write": false, 00:04:47.565 "copy": true, 00:04:47.565 "flush": true, 00:04:47.565 "get_zone_info": false, 00:04:47.565 "nvme_admin": false, 00:04:47.565 "nvme_io": false, 00:04:47.565 "nvme_io_md": false, 00:04:47.565 "nvme_iov_md": false, 00:04:47.565 "read": true, 00:04:47.565 "reset": true, 00:04:47.565 "seek_data": false, 00:04:47.565 "seek_hole": false, 00:04:47.565 "unmap": true, 00:04:47.565 "write": true, 00:04:47.565 "write_zeroes": 
true, 00:04:47.565 "zcopy": true, 00:04:47.565 "zone_append": false, 00:04:47.565 "zone_management": false 00:04:47.565 }, 00:04:47.565 "uuid": "f1a6163b-019d-4412-bafe-22b823ecd02e", 00:04:47.565 "zoned": false 00:04:47.565 }, 00:04:47.565 { 00:04:47.565 "aliases": [ 00:04:47.565 "e5a09b17-a033-5a14-acbb-70b09b06d837" 00:04:47.565 ], 00:04:47.565 "assigned_rate_limits": { 00:04:47.565 "r_mbytes_per_sec": 0, 00:04:47.565 "rw_ios_per_sec": 0, 00:04:47.565 "rw_mbytes_per_sec": 0, 00:04:47.565 "w_mbytes_per_sec": 0 00:04:47.565 }, 00:04:47.565 "block_size": 512, 00:04:47.565 "claimed": false, 00:04:47.565 "driver_specific": { 00:04:47.565 "passthru": { 00:04:47.565 "base_bdev_name": "Malloc3", 00:04:47.565 "name": "Passthru0" 00:04:47.565 } 00:04:47.565 }, 00:04:47.565 "memory_domains": [ 00:04:47.565 { 00:04:47.565 "dma_device_id": "system", 00:04:47.565 "dma_device_type": 1 00:04:47.565 }, 00:04:47.565 { 00:04:47.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:47.565 "dma_device_type": 2 00:04:47.565 } 00:04:47.565 ], 00:04:47.565 "name": "Passthru0", 00:04:47.565 "num_blocks": 16384, 00:04:47.565 "product_name": "passthru", 00:04:47.566 "supported_io_types": { 00:04:47.566 "abort": true, 00:04:47.566 "compare": false, 00:04:47.566 "compare_and_write": false, 00:04:47.566 "copy": true, 00:04:47.566 "flush": true, 00:04:47.566 "get_zone_info": false, 00:04:47.566 "nvme_admin": false, 00:04:47.566 "nvme_io": false, 00:04:47.566 "nvme_io_md": false, 00:04:47.566 "nvme_iov_md": false, 00:04:47.566 "read": true, 00:04:47.566 "reset": true, 00:04:47.566 "seek_data": false, 00:04:47.566 "seek_hole": false, 00:04:47.566 "unmap": true, 00:04:47.566 "write": true, 00:04:47.566 "write_zeroes": true, 00:04:47.566 "zcopy": true, 00:04:47.566 "zone_append": false, 00:04:47.566 "zone_management": false 00:04:47.566 }, 00:04:47.566 "uuid": "e5a09b17-a033-5a14-acbb-70b09b06d837", 00:04:47.566 "zoned": false 00:04:47.566 } 00:04:47.566 ]' 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:47.566 14:21:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:47.566 14:21:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:47.566 
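Both rpc_integrity (Malloc0/Passthru0) and rpc_daemon_integrity (Malloc3/Passthru0) walk the same create, inspect, delete cycle, just through different wrappers. Expressed as direct JSON-RPC calls the cycle looks roughly like the sketch below; scripts/rpc.py and the default socket are assumptions, the method names and arguments are the ones visible in the log, and the malloc bdev is assumed to come back as Malloc0 as it does in the first run.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sudo $rpc bdev_malloc_create 8 512                  # 8 MiB bdev, 512-byte blocks -> 16384 blocks
    sudo $rpc bdev_passthru_create -b Malloc0 -p Passthru0
    sudo $rpc bdev_get_bdevs | jq length                # 2: the malloc bdev plus its passthru
    sudo $rpc bdev_passthru_delete Passthru0
    sudo $rpc bdev_malloc_delete Malloc0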
00:04:47.566 real 0m0.294s 00:04:47.566 user 0m0.200s 00:04:47.566 sys 0m0.033s 00:04:47.566 14:21:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.566 14:21:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:47.566 ************************************ 00:04:47.566 END TEST rpc_daemon_integrity 00:04:47.566 ************************************ 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:47.566 14:21:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:47.566 14:21:27 rpc -- rpc/rpc.sh@84 -- # killprocess 60760 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@948 -- # '[' -z 60760 ']' 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@952 -- # kill -0 60760 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@953 -- # uname 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60760 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.566 killing process with pid 60760 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60760' 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@967 -- # kill 60760 00:04:47.566 14:21:27 rpc -- common/autotest_common.sh@972 -- # wait 60760 00:04:47.825 00:04:47.825 real 0m2.274s 00:04:47.825 user 0m3.218s 00:04:47.825 sys 0m0.562s 00:04:47.825 14:21:27 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.825 14:21:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.825 ************************************ 00:04:47.825 END TEST rpc 00:04:47.825 ************************************ 00:04:47.825 14:21:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:47.825 14:21:27 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.825 14:21:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.825 14:21:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.825 14:21:27 -- common/autotest_common.sh@10 -- # set +x 00:04:47.825 ************************************ 00:04:47.825 START TEST skip_rpc 00:04:47.825 ************************************ 00:04:47.825 14:21:27 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.084 * Looking for test storage... 
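The skip_rpc case starting here is a negative test: spdk_tgt is launched with --no-rpc-server, so /var/tmp/spdk.sock is never created and the later spdk_get_version call is expected to fail with the connect error shown a few lines below. A minimal reproduction, assuming the same flags and a scripts/rpc.py client:

    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    # expected to fail: with --no-rpc-server there is no /var/tmp/spdk.sock to dial
    sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC succeeded" >&2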
00:04:48.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.084 14:21:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.084 14:21:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.084 14:21:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.084 14:21:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.084 14:21:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.084 14:21:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.084 ************************************ 00:04:48.084 START TEST skip_rpc 00:04:48.084 ************************************ 00:04:48.084 14:21:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:48.084 14:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61003 00:04:48.084 14:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.084 14:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.084 14:21:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.084 [2024-07-15 14:21:27.545381] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:48.084 [2024-07-15 14:21:27.545466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61003 ] 00:04:48.342 [2024-07-15 14:21:27.675991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.342 [2024-07-15 14:21:27.764788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.599 2024/07/15 14:21:32 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:53.599 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:53.600 14:21:32 
skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 61003 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 61003 ']' 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 61003 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61003 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.600 killing process with pid 61003 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61003' 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 61003 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 61003 00:04:53.600 00:04:53.600 real 0m5.294s 00:04:53.600 user 0m4.998s 00:04:53.600 sys 0m0.190s 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.600 ************************************ 00:04:53.600 END TEST skip_rpc 00:04:53.600 ************************************ 00:04:53.600 14:21:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 14:21:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:53.600 14:21:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:53.600 14:21:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.600 14:21:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.600 14:21:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 ************************************ 00:04:53.600 START TEST skip_rpc_with_json 00:04:53.600 ************************************ 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61090 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61090 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61090 ']' 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
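skip_rpc_with_json, which starts here, is a JSON config round trip: with the target up it first confirms nvmf_get_transports fails (no transport exists yet), then creates the TCP transport, saves the full configuration to config.json, restarts the target from that file, and greps the second run's log for 'TCP Transport Init' to prove the transport was recreated. Stripped of the harness it is roughly the sketch below; scripts/rpc.py and the output redirection are assumptions, while the method names, flags, and paths are the ones in the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
    sudo $rpc nvmf_create_transport -t tcp       # the call that makes the saved config non-trivial
    sudo $rpc save_config > "$cfg"               # dump every subsystem's current config as JSON
    # restart from the saved file and confirm the TCP transport comes back at init time
    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' "$log"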
00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.600 14:21:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 [2024-07-15 14:21:32.893950] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:04:53.600 [2024-07-15 14:21:32.894042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61090 ] 00:04:53.600 [2024-07-15 14:21:33.034717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.600 [2024-07-15 14:21:33.105236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 [2024-07-15 14:21:33.926369] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:54.534 2024/07/15 14:21:33 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:54.534 request: 00:04:54.534 { 00:04:54.534 "method": "nvmf_get_transports", 00:04:54.534 "params": { 00:04:54.534 "trtype": "tcp" 00:04:54.534 } 00:04:54.534 } 00:04:54.534 Got JSON-RPC error response 00:04:54.534 GoRPCClient: error on JSON-RPC call 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 [2024-07-15 14:21:33.938455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.534 14:21:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.534 14:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.534 { 00:04:54.534 "subsystems": [ 00:04:54.534 { 00:04:54.534 "subsystem": "keyring", 00:04:54.534 "config": [] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "iobuf", 00:04:54.534 "config": [ 00:04:54.534 { 00:04:54.534 "method": "iobuf_set_options", 00:04:54.534 "params": { 00:04:54.534 "large_bufsize": 135168, 00:04:54.534 "large_pool_count": 1024, 00:04:54.534 "small_bufsize": 8192, 00:04:54.534 "small_pool_count": 8192 00:04:54.534 } 00:04:54.534 } 
00:04:54.534 ] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "sock", 00:04:54.534 "config": [ 00:04:54.534 { 00:04:54.534 "method": "sock_set_default_impl", 00:04:54.534 "params": { 00:04:54.534 "impl_name": "posix" 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "sock_impl_set_options", 00:04:54.534 "params": { 00:04:54.534 "enable_ktls": false, 00:04:54.534 "enable_placement_id": 0, 00:04:54.534 "enable_quickack": false, 00:04:54.534 "enable_recv_pipe": true, 00:04:54.534 "enable_zerocopy_send_client": false, 00:04:54.534 "enable_zerocopy_send_server": true, 00:04:54.534 "impl_name": "ssl", 00:04:54.534 "recv_buf_size": 4096, 00:04:54.534 "send_buf_size": 4096, 00:04:54.534 "tls_version": 0, 00:04:54.534 "zerocopy_threshold": 0 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "sock_impl_set_options", 00:04:54.534 "params": { 00:04:54.534 "enable_ktls": false, 00:04:54.534 "enable_placement_id": 0, 00:04:54.534 "enable_quickack": false, 00:04:54.534 "enable_recv_pipe": true, 00:04:54.534 "enable_zerocopy_send_client": false, 00:04:54.534 "enable_zerocopy_send_server": true, 00:04:54.534 "impl_name": "posix", 00:04:54.534 "recv_buf_size": 2097152, 00:04:54.534 "send_buf_size": 2097152, 00:04:54.534 "tls_version": 0, 00:04:54.534 "zerocopy_threshold": 0 00:04:54.534 } 00:04:54.534 } 00:04:54.534 ] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "vmd", 00:04:54.534 "config": [] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "accel", 00:04:54.534 "config": [ 00:04:54.534 { 00:04:54.534 "method": "accel_set_options", 00:04:54.534 "params": { 00:04:54.534 "buf_count": 2048, 00:04:54.534 "large_cache_size": 16, 00:04:54.534 "sequence_count": 2048, 00:04:54.534 "small_cache_size": 128, 00:04:54.534 "task_count": 2048 00:04:54.534 } 00:04:54.534 } 00:04:54.534 ] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "bdev", 00:04:54.534 "config": [ 00:04:54.534 { 00:04:54.534 "method": "bdev_set_options", 00:04:54.534 "params": { 00:04:54.534 "bdev_auto_examine": true, 00:04:54.534 "bdev_io_cache_size": 256, 00:04:54.534 "bdev_io_pool_size": 65535, 00:04:54.534 "iobuf_large_cache_size": 16, 00:04:54.534 "iobuf_small_cache_size": 128 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "bdev_raid_set_options", 00:04:54.534 "params": { 00:04:54.534 "process_window_size_kb": 1024 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "bdev_iscsi_set_options", 00:04:54.534 "params": { 00:04:54.534 "timeout_sec": 30 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "bdev_nvme_set_options", 00:04:54.534 "params": { 00:04:54.534 "action_on_timeout": "none", 00:04:54.534 "allow_accel_sequence": false, 00:04:54.534 "arbitration_burst": 0, 00:04:54.534 "bdev_retry_count": 3, 00:04:54.534 "ctrlr_loss_timeout_sec": 0, 00:04:54.534 "delay_cmd_submit": true, 00:04:54.534 "dhchap_dhgroups": [ 00:04:54.534 "null", 00:04:54.534 "ffdhe2048", 00:04:54.534 "ffdhe3072", 00:04:54.534 "ffdhe4096", 00:04:54.534 "ffdhe6144", 00:04:54.534 "ffdhe8192" 00:04:54.534 ], 00:04:54.534 "dhchap_digests": [ 00:04:54.534 "sha256", 00:04:54.534 "sha384", 00:04:54.534 "sha512" 00:04:54.534 ], 00:04:54.534 "disable_auto_failback": false, 00:04:54.534 "fast_io_fail_timeout_sec": 0, 00:04:54.534 "generate_uuids": false, 00:04:54.534 "high_priority_weight": 0, 00:04:54.534 "io_path_stat": false, 00:04:54.534 "io_queue_requests": 0, 00:04:54.534 "keep_alive_timeout_ms": 10000, 00:04:54.534 "low_priority_weight": 0, 
00:04:54.534 "medium_priority_weight": 0, 00:04:54.534 "nvme_adminq_poll_period_us": 10000, 00:04:54.534 "nvme_error_stat": false, 00:04:54.534 "nvme_ioq_poll_period_us": 0, 00:04:54.534 "rdma_cm_event_timeout_ms": 0, 00:04:54.534 "rdma_max_cq_size": 0, 00:04:54.534 "rdma_srq_size": 0, 00:04:54.534 "reconnect_delay_sec": 0, 00:04:54.534 "timeout_admin_us": 0, 00:04:54.534 "timeout_us": 0, 00:04:54.534 "transport_ack_timeout": 0, 00:04:54.534 "transport_retry_count": 4, 00:04:54.534 "transport_tos": 0 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "bdev_nvme_set_hotplug", 00:04:54.534 "params": { 00:04:54.534 "enable": false, 00:04:54.534 "period_us": 100000 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "bdev_wait_for_examine" 00:04:54.534 } 00:04:54.534 ] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "scsi", 00:04:54.534 "config": null 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "scheduler", 00:04:54.534 "config": [ 00:04:54.534 { 00:04:54.534 "method": "framework_set_scheduler", 00:04:54.534 "params": { 00:04:54.534 "name": "static" 00:04:54.534 } 00:04:54.534 } 00:04:54.534 ] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "vhost_scsi", 00:04:54.534 "config": [] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "vhost_blk", 00:04:54.534 "config": [] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "ublk", 00:04:54.534 "config": [] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "nbd", 00:04:54.534 "config": [] 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "subsystem": "nvmf", 00:04:54.534 "config": [ 00:04:54.534 { 00:04:54.534 "method": "nvmf_set_config", 00:04:54.534 "params": { 00:04:54.534 "admin_cmd_passthru": { 00:04:54.534 "identify_ctrlr": false 00:04:54.534 }, 00:04:54.534 "discovery_filter": "match_any" 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "nvmf_set_max_subsystems", 00:04:54.534 "params": { 00:04:54.534 "max_subsystems": 1024 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "nvmf_set_crdt", 00:04:54.534 "params": { 00:04:54.534 "crdt1": 0, 00:04:54.534 "crdt2": 0, 00:04:54.534 "crdt3": 0 00:04:54.534 } 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "method": "nvmf_create_transport", 00:04:54.534 "params": { 00:04:54.534 "abort_timeout_sec": 1, 00:04:54.534 "ack_timeout": 0, 00:04:54.534 "buf_cache_size": 4294967295, 00:04:54.534 "c2h_success": true, 00:04:54.534 "data_wr_pool_size": 0, 00:04:54.534 "dif_insert_or_strip": false, 00:04:54.534 "in_capsule_data_size": 4096, 00:04:54.534 "io_unit_size": 131072, 00:04:54.534 "max_aq_depth": 128, 00:04:54.534 "max_io_qpairs_per_ctrlr": 127, 00:04:54.534 "max_io_size": 131072, 00:04:54.534 "max_queue_depth": 128, 00:04:54.534 "num_shared_buffers": 511, 00:04:54.534 "sock_priority": 0, 00:04:54.534 "trtype": "TCP", 00:04:54.534 "zcopy": false 00:04:54.535 } 00:04:54.535 } 00:04:54.535 ] 00:04:54.535 }, 00:04:54.535 { 00:04:54.535 "subsystem": "iscsi", 00:04:54.535 "config": [ 00:04:54.535 { 00:04:54.535 "method": "iscsi_set_options", 00:04:54.535 "params": { 00:04:54.535 "allow_duplicated_isid": false, 00:04:54.535 "chap_group": 0, 00:04:54.535 "data_out_pool_size": 2048, 00:04:54.535 "default_time2retain": 20, 00:04:54.535 "default_time2wait": 2, 00:04:54.535 "disable_chap": false, 00:04:54.535 "error_recovery_level": 0, 00:04:54.535 "first_burst_length": 8192, 00:04:54.535 "immediate_data": true, 00:04:54.535 "immediate_data_pool_size": 16384, 00:04:54.535 "max_connections_per_session": 
2, 00:04:54.535 "max_large_datain_per_connection": 64, 00:04:54.535 "max_queue_depth": 64, 00:04:54.535 "max_r2t_per_connection": 4, 00:04:54.535 "max_sessions": 128, 00:04:54.535 "mutual_chap": false, 00:04:54.535 "node_base": "iqn.2016-06.io.spdk", 00:04:54.535 "nop_in_interval": 30, 00:04:54.535 "nop_timeout": 60, 00:04:54.535 "pdu_pool_size": 36864, 00:04:54.535 "require_chap": false 00:04:54.535 } 00:04:54.535 } 00:04:54.535 ] 00:04:54.535 } 00:04:54.535 ] 00:04:54.535 } 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61090 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61090 ']' 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61090 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.535 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61090 00:04:54.793 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.794 killing process with pid 61090 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61090' 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61090 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61090 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61135 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.794 14:21:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61135 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61135 ']' 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61135 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61135 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61135' 00:05:00.054 killing process with pid 61135 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61135 00:05:00.054 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61135 00:05:00.312 14:21:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.313 00:05:00.313 real 0m6.826s 00:05:00.313 user 0m6.777s 00:05:00.313 sys 0m0.479s 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.313 ************************************ 00:05:00.313 END TEST skip_rpc_with_json 00:05:00.313 ************************************ 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.313 14:21:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.313 ************************************ 00:05:00.313 START TEST skip_rpc_with_delay 00:05:00.313 ************************************ 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:00.313 [2024-07-15 14:21:39.770595] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
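The JSON dump above is the configuration that skip_rpc_with_json captures from the running target and then feeds straight back in: the trace relaunches spdk_tgt with --no-rpc-server and --json, waits, and greps the target's output for the nvmf TCP transport banner to confirm the config was applied with no RPC server at all. A minimal sketch of that round trip, assuming the config was captured earlier with rpc.py save_config over the default socket and that the target's output is redirected to log.txt (the harness arranges both of these itself):

spdk=/home/vagrant/spdk_repo/spdk
# capture the live configuration (assumed earlier step, default /var/tmp/spdk.sock socket)
$spdk/scripts/rpc.py save_config > $spdk/test/rpc/config.json
# relaunch purely from the JSON file, with no RPC server
$spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
    --json $spdk/test/rpc/config.json > $spdk/test/rpc/log.txt 2>&1 &
sleep 5                                                # same settle time the test uses
grep -q 'TCP Transport Init' $spdk/test/rpc/log.txt    # transport came up from JSON alone
rm $spdk/test/rpc/log.txt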
00:05:00.313 [2024-07-15 14:21:39.770760] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:00.313 00:05:00.313 real 0m0.088s 00:05:00.313 user 0m0.058s 00:05:00.313 sys 0m0.029s 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.313 14:21:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:00.313 ************************************ 00:05:00.313 END TEST skip_rpc_with_delay 00:05:00.313 ************************************ 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:00.313 14:21:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:00.313 14:21:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:00.313 14:21:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.313 14:21:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.313 ************************************ 00:05:00.313 START TEST exit_on_failed_rpc_init 00:05:00.313 ************************************ 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61239 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61239 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61239 ']' 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.313 14:21:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.313 [2024-07-15 14:21:39.900979] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:00.313 [2024-07-15 14:21:39.901082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61239 ] 00:05:00.571 [2024-07-15 14:21:40.033846] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.571 [2024-07-15 14:21:40.092130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.910 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:00.911 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.911 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.911 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.911 [2024-07-15 14:21:40.322822] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:00.911 [2024-07-15 14:21:40.322930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61261 ] 00:05:00.911 [2024-07-15 14:21:40.462948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.174 [2024-07-15 14:21:40.532154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.174 [2024-07-15 14:21:40.532253] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:01.174 [2024-07-15 14:21:40.532270] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.174 [2024-07-15 14:21:40.532281] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61239 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61239 ']' 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61239 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61239 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.175 killing process with pid 61239 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61239' 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61239 00:05:01.175 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61239 00:05:01.433 00:05:01.433 real 0m1.054s 00:05:01.433 user 0m1.240s 00:05:01.433 sys 0m0.283s 00:05:01.433 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.433 14:21:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.433 ************************************ 00:05:01.433 END TEST exit_on_failed_rpc_init 00:05:01.433 ************************************ 00:05:01.433 14:21:40 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:01.433 14:21:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:01.433 00:05:01.433 real 0m13.534s 00:05:01.433 user 0m13.172s 00:05:01.433 sys 0m1.140s 00:05:01.433 14:21:40 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.433 14:21:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.433 ************************************ 00:05:01.433 END TEST skip_rpc 00:05:01.433 ************************************ 00:05:01.433 14:21:40 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.433 14:21:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.433 14:21:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.433 
14:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.433 14:21:40 -- common/autotest_common.sh@10 -- # set +x 00:05:01.433 ************************************ 00:05:01.433 START TEST rpc_client 00:05:01.433 ************************************ 00:05:01.433 14:21:40 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:01.693 * Looking for test storage... 00:05:01.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:01.693 14:21:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:01.693 OK 00:05:01.693 14:21:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:01.693 00:05:01.693 real 0m0.100s 00:05:01.693 user 0m0.040s 00:05:01.693 sys 0m0.067s 00:05:01.693 14:21:41 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.693 14:21:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 ************************************ 00:05:01.693 END TEST rpc_client 00:05:01.693 ************************************ 00:05:01.693 14:21:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:01.693 14:21:41 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.693 14:21:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.693 14:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.693 14:21:41 -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 ************************************ 00:05:01.693 START TEST json_config 00:05:01.693 ************************************ 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:01.693 14:21:41 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.693 14:21:41 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.693 14:21:41 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.693 14:21:41 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.693 14:21:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.693 14:21:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.693 14:21:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.693 14:21:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:01.693 14:21:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@47 -- # : 0 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:01.693 14:21:41 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:01.693 INFO: JSON configuration test init 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.693 14:21:41 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:01.693 14:21:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:01.693 14:21:41 json_config -- json_config/common.sh@10 -- # shift 00:05:01.693 14:21:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:01.693 14:21:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:01.693 14:21:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:01.693 14:21:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.693 14:21:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:01.693 14:21:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61379 00:05:01.693 Waiting for target to run... 00:05:01.693 14:21:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:01.693 14:21:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:01.693 14:21:41 json_config -- json_config/common.sh@25 -- # waitforlisten 61379 /var/tmp/spdk_tgt.sock 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 61379 ']' 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:01.693 14:21:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.694 14:21:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.694 [2024-07-15 14:21:41.272262] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:01.694 [2024-07-15 14:21:41.272348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61379 ] 00:05:02.261 [2024-07-15 14:21:41.555691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.261 [2024-07-15 14:21:41.611971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.828 14:21:42 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.828 14:21:42 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:02.828 00:05:02.828 14:21:42 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.828 14:21:42 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:02.828 14:21:42 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:02.828 14:21:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.828 14:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.828 14:21:42 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:02.828 14:21:42 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:02.828 14:21:42 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.828 14:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.828 14:21:42 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:02.828 14:21:42 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:02.828 14:21:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:03.395 14:21:42 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.395 14:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@46 
-- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:03.395 14:21:42 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:03.395 14:21:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:03.654 14:21:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.654 14:21:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:03.654 14:21:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.654 14:21:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:03.654 14:21:43 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.654 14:21:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:03.913 MallocForNvmf0 00:05:03.913 14:21:43 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:03.913 14:21:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.171 MallocForNvmf1 00:05:04.171 14:21:43 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.171 14:21:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.429 [2024-07-15 14:21:43.975183] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.429 14:21:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.429 14:21:43 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.687 14:21:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.687 14:21:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.944 14:21:44 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.944 14:21:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.202 14:21:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.202 14:21:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.768 [2024-07-15 14:21:45.063828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.768 14:21:45 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:05.768 14:21:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.768 14:21:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.768 14:21:45 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:05.768 14:21:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:05.768 14:21:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.768 14:21:45 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:05.768 14:21:45 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.768 14:21:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:06.026 MallocBdevForConfigChangeCheck 00:05:06.026 14:21:45 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:06.026 14:21:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:06.026 14:21:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.027 14:21:45 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:06.027 14:21:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.593 INFO: shutting down applications... 00:05:06.593 14:21:45 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
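Every tgt_rpc call traced above is the same wrapper: scripts/rpc.py pointed at the test target's socket, /var/tmp/spdk_tgt.sock. The nvmf configuration the test builds before saving it condenses to a handful of those calls; a sketch reusing the exact parameters from the trace (the tgt_rpc function and the final redirect are shorthand here, not the harness's own code):

tgt_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

tgt_rpc bdev_malloc_create 8 512  --name MallocForNvmf0       # 8 MiB namespace bdev, 512 B blocks
tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MiB namespace bdev, 1024 B blocks
tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport, flags as traced
tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
tgt_rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json   # snapshot reused on relaunch (redirect assumed)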
00:05:06.593 14:21:45 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:06.593 14:21:45 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:06.593 14:21:45 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:06.593 14:21:45 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:06.851 Calling clear_iscsi_subsystem 00:05:06.851 Calling clear_nvmf_subsystem 00:05:06.851 Calling clear_nbd_subsystem 00:05:06.851 Calling clear_ublk_subsystem 00:05:06.851 Calling clear_vhost_blk_subsystem 00:05:06.851 Calling clear_vhost_scsi_subsystem 00:05:06.851 Calling clear_bdev_subsystem 00:05:06.851 14:21:46 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:06.851 14:21:46 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:06.851 14:21:46 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:06.851 14:21:46 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.851 14:21:46 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:06.851 14:21:46 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:07.110 14:21:46 json_config -- json_config/json_config.sh@345 -- # break 00:05:07.110 14:21:46 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:07.110 14:21:46 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:07.110 14:21:46 json_config -- json_config/common.sh@31 -- # local app=target 00:05:07.110 14:21:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.110 14:21:46 json_config -- json_config/common.sh@35 -- # [[ -n 61379 ]] 00:05:07.110 14:21:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61379 00:05:07.110 14:21:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.110 14:21:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.110 14:21:46 json_config -- json_config/common.sh@41 -- # kill -0 61379 00:05:07.110 14:21:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.676 14:21:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.676 14:21:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.676 14:21:47 json_config -- json_config/common.sh@41 -- # kill -0 61379 00:05:07.676 14:21:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.676 14:21:47 json_config -- json_config/common.sh@43 -- # break 00:05:07.676 14:21:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.676 SPDK target shutdown done 00:05:07.676 14:21:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.676 INFO: relaunching applications... 00:05:07.676 14:21:47 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:05:07.676 14:21:47 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.676 14:21:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.676 14:21:47 json_config -- json_config/common.sh@10 -- # shift 00:05:07.676 14:21:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.676 14:21:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.676 14:21:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.676 14:21:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.676 14:21:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.676 14:21:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61649 00:05:07.676 14:21:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:07.676 Waiting for target to run... 00:05:07.676 14:21:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.676 14:21:47 json_config -- json_config/common.sh@25 -- # waitforlisten 61649 /var/tmp/spdk_tgt.sock 00:05:07.676 14:21:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 61649 ']' 00:05:07.676 14:21:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.676 14:21:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.676 14:21:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.676 14:21:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.676 14:21:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.676 [2024-07-15 14:21:47.226806] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:07.676 [2024-07-15 14:21:47.226903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61649 ] 00:05:07.934 [2024-07-15 14:21:47.520470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.193 [2024-07-15 14:21:47.575949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.451 [2024-07-15 14:21:47.893712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.451 [2024-07-15 14:21:47.925761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:08.709 14:21:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.709 14:21:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:08.709 00:05:08.709 14:21:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.709 14:21:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:08.709 14:21:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:08.709 INFO: Checking if target configuration is the same... 
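The "same configuration" check that follows is a canonicalize-and-diff: the live config pulled over the socket with save_config and the spdk_tgt_config.json the target was booted from are each run through config_filter.py -method sort, then compared with diff -u. After MallocBdevForConfigChangeCheck is deleted, the same comparison is expected to produce a diff, which is the change-detection half of the test. Roughly, with temporary-file handling simplified relative to json_diff.sh:

spdk=/home/vagrant/spdk_repo/spdk
live=$(mktemp); saved=$(mktemp)
$spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $spdk/test/json_config/config_filter.py -method sort > "$live"
$spdk/test/json_config/config_filter.py -method sort < $spdk/spdk_tgt_config.json > "$saved"
diff -u "$saved" "$live"   # exit 0: configs match; non-zero: a configuration change was detected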
00:05:08.709 14:21:48 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.709 14:21:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:08.709 14:21:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.709 + '[' 2 -ne 2 ']' 00:05:08.709 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:08.709 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:08.709 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:08.709 +++ basename /dev/fd/62 00:05:08.709 ++ mktemp /tmp/62.XXX 00:05:08.709 + tmp_file_1=/tmp/62.0m7 00:05:08.709 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:08.709 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:08.709 + tmp_file_2=/tmp/spdk_tgt_config.json.j0B 00:05:08.709 + ret=0 00:05:08.709 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.277 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:09.277 + diff -u /tmp/62.0m7 /tmp/spdk_tgt_config.json.j0B 00:05:09.277 INFO: JSON config files are the same 00:05:09.277 + echo 'INFO: JSON config files are the same' 00:05:09.277 + rm /tmp/62.0m7 /tmp/spdk_tgt_config.json.j0B 00:05:09.277 + exit 0 00:05:09.277 14:21:48 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:09.277 14:21:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.277 INFO: changing configuration and checking if this can be detected... 00:05:09.277 14:21:48 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.277 14:21:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.535 14:21:48 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:09.535 14:21:48 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.535 14:21:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.535 + '[' 2 -ne 2 ']' 00:05:09.535 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:09.535 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:09.535 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:09.535 +++ basename /dev/fd/62 00:05:09.535 ++ mktemp /tmp/62.XXX 00:05:09.535 + tmp_file_1=/tmp/62.7Bl 00:05:09.535 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:09.535 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.535 + tmp_file_2=/tmp/spdk_tgt_config.json.zTl 00:05:09.535 + ret=0 00:05:09.535 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.101 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:10.101 + diff -u /tmp/62.7Bl /tmp/spdk_tgt_config.json.zTl 00:05:10.101 + ret=1 00:05:10.101 + echo '=== Start of file: /tmp/62.7Bl ===' 00:05:10.101 + cat /tmp/62.7Bl 00:05:10.101 + echo '=== End of file: /tmp/62.7Bl ===' 00:05:10.101 + echo '' 00:05:10.101 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zTl ===' 00:05:10.101 + cat /tmp/spdk_tgt_config.json.zTl 00:05:10.101 + echo '=== End of file: /tmp/spdk_tgt_config.json.zTl ===' 00:05:10.101 + echo '' 00:05:10.101 + rm /tmp/62.7Bl /tmp/spdk_tgt_config.json.zTl 00:05:10.101 + exit 1 00:05:10.101 INFO: configuration change detected. 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@317 -- # [[ -n 61649 ]] 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.101 14:21:49 json_config -- json_config/json_config.sh@323 -- # killprocess 61649 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@948 -- # '[' -z 61649 ']' 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@952 -- # kill -0 61649 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@953 -- # uname 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61649 00:05:10.101 
14:21:49 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61649' 00:05:10.101 killing process with pid 61649 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@967 -- # kill 61649 00:05:10.101 14:21:49 json_config -- common/autotest_common.sh@972 -- # wait 61649 00:05:10.360 14:21:49 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:10.360 14:21:49 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:10.360 14:21:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.360 14:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.360 14:21:49 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:10.360 INFO: Success 00:05:10.360 14:21:49 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:10.360 00:05:10.360 real 0m8.646s 00:05:10.360 user 0m12.858s 00:05:10.360 sys 0m1.536s 00:05:10.360 14:21:49 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.360 ************************************ 00:05:10.360 END TEST json_config 00:05:10.360 ************************************ 00:05:10.360 14:21:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.360 14:21:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:10.360 14:21:49 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.360 14:21:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.360 14:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.360 14:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:10.360 ************************************ 00:05:10.360 START TEST json_config_extra_key 00:05:10.360 ************************************ 00:05:10.360 14:21:49 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:10.360 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.360 14:21:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:10.361 14:21:49 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.361 14:21:49 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.361 14:21:49 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.361 14:21:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.361 14:21:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.361 14:21:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.361 14:21:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.361 14:21:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.361 14:21:49 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:10.361 14:21:49 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.361 INFO: launching applications... 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:10.361 14:21:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61825 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.361 Waiting for target to run... 
00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61825 /var/tmp/spdk_tgt.sock 00:05:10.361 14:21:49 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61825 ']' 00:05:10.361 14:21:49 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:10.361 14:21:49 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.361 14:21:49 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.361 14:21:49 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.361 14:21:49 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.361 14:21:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.619 [2024-07-15 14:21:49.961889] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:10.619 [2024-07-15 14:21:49.961989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61825 ] 00:05:10.877 [2024-07-15 14:21:50.253540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.877 [2024-07-15 14:21:50.308195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.444 14:21:50 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.444 14:21:50 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:11.444 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.444 INFO: shutting down applications... 00:05:11.444 14:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
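Stripped of the json_config_extra_key helpers, the launch recorded above and the shutdown sequence that follows reduce to a few shell steps. A minimal sketch using only the binary, flags and socket visible in the trace; the polling loop stands in for waitforlisten, and the rpc_get_methods probe is an illustrative choice rather than the test's own:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk_tgt.sock

# launch the target on core 0 (-m 0x1) with 1024 MB of memory (-s), a private
# RPC socket (-r) and the extra-key JSON config applied at startup (--json)
"$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
pid=$!

# block until the app answers on its RPC socket before touching it
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# the teardown below sends SIGINT and then polls with kill -0; this is the short form
kill -SIGINT "$pid"
wait "$pid"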
00:05:11.444 14:21:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61825 ]] 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61825 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61825 00:05:11.444 14:21:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61825 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.010 SPDK target shutdown done 00:05:12.010 14:21:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.010 Success 00:05:12.010 14:21:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.010 00:05:12.010 real 0m1.666s 00:05:12.010 user 0m1.579s 00:05:12.010 sys 0m0.304s 00:05:12.010 14:21:51 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.010 14:21:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.010 ************************************ 00:05:12.010 END TEST json_config_extra_key 00:05:12.010 ************************************ 00:05:12.010 14:21:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.010 14:21:51 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.010 14:21:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.010 14:21:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.010 14:21:51 -- common/autotest_common.sh@10 -- # set +x 00:05:12.010 ************************************ 00:05:12.010 START TEST alias_rpc 00:05:12.010 ************************************ 00:05:12.010 14:21:51 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.267 * Looking for test storage... 
00:05:12.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.267 14:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.267 14:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61907 00:05:12.267 14:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61907 00:05:12.267 14:21:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.267 14:21:51 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61907 ']' 00:05:12.267 14:21:51 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.268 14:21:51 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.268 14:21:51 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.268 14:21:51 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.268 14:21:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.268 [2024-07-15 14:21:51.673108] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:12.268 [2024-07-15 14:21:51.673238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61907 ] 00:05:12.268 [2024-07-15 14:21:51.810493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.525 [2024-07-15 14:21:51.890328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.090 14:21:52 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.090 14:21:52 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:13.090 14:21:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:13.656 14:21:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61907 00:05:13.656 14:21:52 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61907 ']' 00:05:13.656 14:21:52 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61907 00:05:13.656 14:21:52 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61907 00:05:13.656 killing process with pid 61907 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61907' 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@967 -- # kill 61907 00:05:13.656 14:21:53 alias_rpc -- common/autotest_common.sh@972 -- # wait 61907 00:05:13.915 00:05:13.915 real 0m1.734s 00:05:13.915 user 0m2.183s 00:05:13.915 sys 0m0.318s 00:05:13.915 14:21:53 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.915 14:21:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.915 ************************************ 00:05:13.915 END TEST alias_rpc 00:05:13.915 ************************************ 00:05:13.915 
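The alias_rpc run that just finished follows the same start/wait/kill skeleton; the step specific to this test is piping a JSON config into rpc.py load_config -i against the target's default socket. A condensed sketch, with the -i flag taken verbatim from the trace and an empty placeholder config standing in for whatever alias_rpc.sh really feeds it:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$spdk_tgt" &                 # no -r, so the default /var/tmp/spdk.sock is used, as above
pid=$!
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# load_config reads a JSON config from stdin and replays it as RPC calls
echo '{"subsystems": []}' | "$rpc" load_config -i

kill "$pid"                   # killprocess 61907 above adds uname/ps checks around this
wait "$pid"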
14:21:53 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.915 14:21:53 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:05:13.915 14:21:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:13.915 14:21:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.915 14:21:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.915 14:21:53 -- common/autotest_common.sh@10 -- # set +x 00:05:13.915 ************************************ 00:05:13.915 START TEST dpdk_mem_utility 00:05:13.915 ************************************ 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:13.915 * Looking for test storage... 00:05:13.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:13.915 14:21:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:13.915 14:21:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61994 00:05:13.915 14:21:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61994 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61994 ']' 00:05:13.915 14:21:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.915 14:21:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.915 [2024-07-15 14:21:53.465015] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:13.915 [2024-07-15 14:21:53.465128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61994 ] 00:05:14.173 [2024-07-15 14:21:53.604479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.173 [2024-07-15 14:21:53.672619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.109 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.109 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:15.109 14:21:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.109 14:21:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.109 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.109 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.109 { 00:05:15.109 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.109 } 00:05:15.109 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.109 14:21:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:15.109 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:15.109 1 heaps totaling size 814.000000 MiB 00:05:15.109 size: 814.000000 MiB heap id: 0 00:05:15.109 end heaps---------- 00:05:15.109 8 mempools totaling size 598.116089 MiB 00:05:15.109 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.109 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.109 size: 84.521057 MiB name: bdev_io_61994 00:05:15.109 size: 51.011292 MiB name: evtpool_61994 00:05:15.109 size: 50.003479 MiB name: msgpool_61994 00:05:15.109 size: 21.763794 MiB name: PDU_Pool 00:05:15.109 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.109 size: 0.026123 MiB name: Session_Pool 00:05:15.109 end mempools------- 00:05:15.109 6 memzones totaling size 4.142822 MiB 00:05:15.109 size: 1.000366 MiB name: RG_ring_0_61994 00:05:15.109 size: 1.000366 MiB name: RG_ring_1_61994 00:05:15.109 size: 1.000366 MiB name: RG_ring_4_61994 00:05:15.109 size: 1.000366 MiB name: RG_ring_5_61994 00:05:15.109 size: 0.125366 MiB name: RG_ring_2_61994 00:05:15.109 size: 0.015991 MiB name: RG_ring_3_61994 00:05:15.109 end memzones------- 00:05:15.109 14:21:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.109 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 00:05:15.109 list of free elements. 
size: 12.486938 MiB 00:05:15.109 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:15.109 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:15.109 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:15.109 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:15.109 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:15.109 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:15.109 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:15.110 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:15.110 element at address: 0x200000200000 with size: 0.837036 MiB 00:05:15.110 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:05:15.110 element at address: 0x20000b200000 with size: 0.489807 MiB 00:05:15.110 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:15.110 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:15.110 element at address: 0x200027e00000 with size: 0.398682 MiB 00:05:15.110 element at address: 0x200003a00000 with size: 0.350769 MiB 00:05:15.110 list of standard malloc elements. size: 199.250488 MiB 00:05:15.110 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:15.110 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:15.110 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:15.110 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:15.110 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:15.110 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:15.110 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:15.110 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:15.110 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:15.110 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:05:15.110 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:15.110 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94600 
with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:15.110 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200027e66100 with size: 0.000183 MiB 00:05:15.110 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6cdc0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e340 with size: 0.000183 MiB 
00:05:15.111 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:15.111 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:15.111 list of memzone associated elements. 
size: 602.262573 MiB 00:05:15.111 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:15.111 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.111 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:15.111 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.111 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:15.111 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61994_0 00:05:15.111 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:15.111 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61994_0 00:05:15.111 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:15.111 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61994_0 00:05:15.111 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:15.111 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.111 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:15.111 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.111 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:15.111 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61994 00:05:15.111 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:15.111 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61994 00:05:15.111 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:15.111 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61994 00:05:15.111 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:15.111 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.111 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:15.111 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.111 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:15.111 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.111 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:15.111 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.111 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:15.111 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61994 00:05:15.111 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:15.111 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61994 00:05:15.111 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:15.111 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61994 00:05:15.111 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:15.111 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61994 00:05:15.111 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:15.111 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61994 00:05:15.111 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:15.111 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.111 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:15.111 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.111 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:15.111 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.111 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:15.111 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61994 00:05:15.111 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:15.111 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.111 element at address: 0x200027e66280 with size: 0.023743 MiB 00:05:15.111 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.111 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:15.111 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61994 00:05:15.111 element at address: 0x200027e6c3c0 with size: 0.002441 MiB 00:05:15.111 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.111 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:15.111 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61994 00:05:15.111 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:15.111 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61994 00:05:15.111 element at address: 0x200027e6ce80 with size: 0.000305 MiB 00:05:15.111 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.111 14:21:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.111 14:21:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61994 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61994 ']' 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61994 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61994 00:05:15.111 killing process with pid 61994 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61994' 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61994 00:05:15.111 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61994 00:05:15.370 00:05:15.370 real 0m1.532s 00:05:15.370 user 0m1.786s 00:05:15.370 sys 0m0.312s 00:05:15.370 ************************************ 00:05:15.370 END TEST dpdk_mem_utility 00:05:15.370 ************************************ 00:05:15.370 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.370 14:21:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.370 14:21:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.370 14:21:54 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.370 14:21:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.370 14:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.370 14:21:54 -- common/autotest_common.sh@10 -- # set +x 00:05:15.370 ************************************ 00:05:15.370 START TEST event 00:05:15.370 ************************************ 00:05:15.370 14:21:54 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.628 * Looking for test storage... 
00:05:15.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:15.628 14:21:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:15.628 14:21:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:15.628 14:21:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.628 14:21:54 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:15.628 14:21:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.628 14:21:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.628 ************************************ 00:05:15.628 START TEST event_perf 00:05:15.628 ************************************ 00:05:15.628 14:21:54 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.628 Running I/O for 1 seconds...[2024-07-15 14:21:55.013372] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:15.628 [2024-07-15 14:21:55.014144] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ] 00:05:15.628 [2024-07-15 14:21:55.153437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.887 [2024-07-15 14:21:55.232828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.887 [2024-07-15 14:21:55.232974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.887 Running I/O for 1 seconds...[2024-07-15 14:21:55.233574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.887 [2024-07-15 14:21:55.233615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.823 00:05:16.823 lcore 0: 194134 00:05:16.823 lcore 1: 194135 00:05:16.823 lcore 2: 194136 00:05:16.823 lcore 3: 194133 00:05:16.823 done. 00:05:16.823 00:05:16.823 real 0m1.315s 00:05:16.823 user 0m4.136s 00:05:16.823 sys 0m0.052s 00:05:16.823 14:21:56 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.823 14:21:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.823 ************************************ 00:05:16.823 END TEST event_perf 00:05:16.823 ************************************ 00:05:16.823 14:21:56 event -- common/autotest_common.sh@1142 -- # return 0 00:05:16.823 14:21:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.823 14:21:56 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:16.823 14:21:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.823 14:21:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.823 ************************************ 00:05:16.823 START TEST event_reactor 00:05:16.823 ************************************ 00:05:16.823 14:21:56 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:16.823 [2024-07-15 14:21:56.380018] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:16.823 [2024-07-15 14:21:56.380099] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62122 ] 00:05:17.082 [2024-07-15 14:21:56.515953] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.082 [2024-07-15 14:21:56.586093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.457 test_start 00:05:18.457 oneshot 00:05:18.457 tick 100 00:05:18.457 tick 100 00:05:18.457 tick 250 00:05:18.457 tick 100 00:05:18.457 tick 100 00:05:18.457 tick 250 00:05:18.457 tick 500 00:05:18.457 tick 100 00:05:18.457 tick 100 00:05:18.457 tick 100 00:05:18.457 tick 250 00:05:18.457 tick 100 00:05:18.457 tick 100 00:05:18.457 test_end 00:05:18.457 ************************************ 00:05:18.457 END TEST event_reactor 00:05:18.457 ************************************ 00:05:18.457 00:05:18.457 real 0m1.301s 00:05:18.457 user 0m1.156s 00:05:18.457 sys 0m0.037s 00:05:18.457 14:21:57 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.457 14:21:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.457 14:21:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:18.457 14:21:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.457 14:21:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:18.457 14:21:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.457 14:21:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.457 ************************************ 00:05:18.457 START TEST event_reactor_perf 00:05:18.457 ************************************ 00:05:18.457 14:21:57 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.457 [2024-07-15 14:21:57.734944] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:18.457 [2024-07-15 14:21:57.735073] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62152 ] 00:05:18.457 [2024-07-15 14:21:57.873936] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.457 [2024-07-15 14:21:57.935977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.829 test_start 00:05:19.829 test_end 00:05:19.829 Performance: 350879 events per second 00:05:19.829 ************************************ 00:05:19.829 END TEST event_reactor_perf 00:05:19.829 00:05:19.829 real 0m1.290s 00:05:19.829 user 0m1.143s 00:05:19.829 sys 0m0.041s 00:05:19.830 14:21:59 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.830 14:21:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.830 ************************************ 00:05:19.830 14:21:59 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.830 14:21:59 event -- event/event.sh@49 -- # uname -s 00:05:19.830 14:21:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:19.830 14:21:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.830 14:21:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.830 14:21:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.830 14:21:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.830 ************************************ 00:05:19.830 START TEST event_scheduler 00:05:19.830 ************************************ 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:19.830 * Looking for test storage... 00:05:19.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:19.830 14:21:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:19.830 14:21:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62213 00:05:19.830 14:21:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:19.830 14:21:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.830 14:21:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62213 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62213 ']' 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.830 14:21:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.830 [2024-07-15 14:21:59.186774] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:19.830 [2024-07-15 14:21:59.187525] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:05:19.830 [2024-07-15 14:21:59.318871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.830 [2024-07-15 14:21:59.397622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.830 [2024-07-15 14:21:59.397747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.830 [2024-07-15 14:21:59.397874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.830 [2024-07-15 14:21:59.397881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:20.763 14:22:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.763 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.763 POWER: Cannot set governor of lcore 0 to performance 00:05:20.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.763 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:20.763 POWER: Cannot set governor of lcore 0 to userspace 00:05:20.763 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:20.763 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:20.763 POWER: Unable to set Power Management Environment for lcore 0 00:05:20.763 [2024-07-15 14:22:00.181349] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:20.763 [2024-07-15 14:22:00.181454] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:20.763 [2024-07-15 14:22:00.181497] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.763 [2024-07-15 14:22:00.181631] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.763 [2024-07-15 14:22:00.181677] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.763 [2024-07-15 14:22:00.181736] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 [2024-07-15 14:22:00.234436] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
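Those two rpc_cmd calls are the whole scheduler bring-up: the app sits paused in --wait-for-rpc mode, the dynamic scheduler is selected (the POWER/governor errors above are tolerated and the scheduler falls back, as the NOTICE lines show), and framework_start_init then lets the reactors run the test. The equivalent bare rpc.py calls against the same default socket would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" framework_set_scheduler dynamic   # issued while startup is still held by --wait-for-rpc, as in the trace
"$rpc" framework_start_init              # completes initialization; 'Scheduler test application started.' follows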
00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 ************************************ 00:05:20.763 START TEST scheduler_create_thread 00:05:20.763 ************************************ 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 2 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 3 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 4 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 5 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 6 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 7 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 8 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 9 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 10 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.763 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.021 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.021 14:22:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:21.021 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.021 14:22:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.434 14:22:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.434 14:22:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:22.434 14:22:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:22.434 14:22:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.434 14:22:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.367 ************************************ 00:05:23.367 END TEST scheduler_create_thread 00:05:23.367 ************************************ 00:05:23.367 14:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.367 00:05:23.367 real 0m2.616s 00:05:23.367 user 0m0.020s 00:05:23.367 sys 0m0.005s 00:05:23.367 14:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.368 14:22:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:23.368 14:22:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:23.368 14:22:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62213 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62213 ']' 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62213 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62213 00:05:23.368 killing process with pid 62213 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62213' 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62213 00:05:23.368 14:22:02 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62213 00:05:23.935 [2024-07-15 14:22:03.342283] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
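The scheduler_create_thread trace above is driven entirely through rpc.py with a test plugin. A minimal sketch of the same RPC sequence, assuming the scheduler plugin is importable by rpc.py (as the harness arranges) and using plain rpc.py where the log's rpc_cmd wrapper appears; the thread names, masks and activity percentages mirror the ones logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # pinned threads: one fully busy (-a 100) and one idle (-a 0) per core mask
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # an unpinned thread whose load is raised at run time, and one that is deleted again
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    del=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$del"

As in the trace, scheduler_thread_create prints the new thread id on stdout, which is what the harness captures into thread_id before calling scheduler_thread_set_active and scheduler_thread_delete.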
00:05:23.935 00:05:23.935 real 0m4.458s 00:05:23.935 user 0m8.664s 00:05:23.935 sys 0m0.298s 00:05:23.935 14:22:03 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.935 14:22:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.935 ************************************ 00:05:23.935 END TEST event_scheduler 00:05:23.935 ************************************ 00:05:24.193 14:22:03 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.193 14:22:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:24.193 14:22:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:24.193 14:22:03 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.193 14:22:03 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.193 14:22:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.193 ************************************ 00:05:24.193 START TEST app_repeat 00:05:24.193 ************************************ 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:24.193 Process app_repeat pid: 62331 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62331 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62331' 00:05:24.193 spdk_app_start Round 0 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:24.193 14:22:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.193 14:22:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.193 [2024-07-15 14:22:03.602231] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
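app_repeat itself is launched with the arguments visible in the trace: the NBD RPC socket, core mask 0x3 and four repeats per round. The poll loop below is only a simplified stand-in for the waitforlisten helper, whose real implementation is not shown in this log; rpc_get_methods is used here as a generic liveness probe and is an assumption about how readiness could be detected:

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    # simplified waitforlisten: poll until the app answers RPC on its UNIX socket
    for _ in $(seq 1 100); do
        [ -S "$sock" ] && /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done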
00:05:24.193 [2024-07-15 14:22:03.602321] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62331 ] 00:05:24.193 [2024-07-15 14:22:03.738972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.451 [2024-07-15 14:22:03.809390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.451 [2024-07-15 14:22:03.809403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.451 14:22:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.451 14:22:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:24.451 14:22:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.708 Malloc0 00:05:24.708 14:22:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.965 Malloc1 00:05:24.965 14:22:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.965 14:22:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.223 /dev/nbd0 00:05:25.223 14:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.223 14:22:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:25.223 14:22:04 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.223 1+0 records in 00:05:25.223 1+0 records out 00:05:25.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170902 s, 24.0 MB/s 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:25.223 14:22:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:25.223 14:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.223 14:22:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.223 14:22:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.481 /dev/nbd1 00:05:25.481 14:22:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.481 14:22:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.481 14:22:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:25.481 14:22:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:25.481 14:22:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:25.481 14:22:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:25.481 14:22:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:25.481 14:22:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.482 1+0 records in 00:05:25.482 1+0 records out 00:05:25.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304775 s, 13.4 MB/s 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:25.482 14:22:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:25.482 14:22:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.482 14:22:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.482 14:22:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.482 14:22:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
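Each nbd_start_disk above is followed by a readiness probe before the device is used. Reconstructed from the commands in the trace (the /proc/partitions grep, the single O_DIRECT read, the stat size check), with the retry loop and the temporary file path being simplifying assumptions:

    waitfornbd() {
        local nbd_name=$1 test_file=/tmp/nbdtest i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4096-byte block with O_DIRECT so we know the device really serves I/O
        dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct
        [ "$(stat -c %s "$test_file")" != 0 ]
        local ok=$?
        rm -f "$test_file"
        return $ok
    }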
00:05:25.482 14:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.047 { 00:05:26.047 "bdev_name": "Malloc0", 00:05:26.047 "nbd_device": "/dev/nbd0" 00:05:26.047 }, 00:05:26.047 { 00:05:26.047 "bdev_name": "Malloc1", 00:05:26.047 "nbd_device": "/dev/nbd1" 00:05:26.047 } 00:05:26.047 ]' 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.047 { 00:05:26.047 "bdev_name": "Malloc0", 00:05:26.047 "nbd_device": "/dev/nbd0" 00:05:26.047 }, 00:05:26.047 { 00:05:26.047 "bdev_name": "Malloc1", 00:05:26.047 "nbd_device": "/dev/nbd1" 00:05:26.047 } 00:05:26.047 ]' 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.047 /dev/nbd1' 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.047 /dev/nbd1' 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.047 14:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.048 256+0 records in 00:05:26.048 256+0 records out 00:05:26.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00902969 s, 116 MB/s 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.048 256+0 records in 00:05:26.048 256+0 records out 00:05:26.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240417 s, 43.6 MB/s 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.048 256+0 records in 00:05:26.048 256+0 records out 00:05:26.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338576 s, 31.0 MB/s 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.048 14:22:05 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.048 14:22:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.305 14:22:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.562 14:22:06 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.562 14:22:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.183 14:22:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.183 14:22:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.454 14:22:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.454 [2024-07-15 14:22:06.902211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.454 [2024-07-15 14:22:06.959136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.454 [2024-07-15 14:22:06.959148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.454 [2024-07-15 14:22:06.987516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.454 [2024-07-15 14:22:06.987573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.730 14:22:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.730 14:22:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.730 spdk_app_start Round 1 00:05:30.730 14:22:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:30.730 14:22:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:30.730 14:22:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.730 14:22:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.730 14:22:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
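The device-count checks before and after teardown come from nbd_get_disks: the returned JSON is reduced to device paths with jq and the paths are counted with grep. A condensed sketch of that check, assuming the same RPC socket as above; folding the traced steps into one snippet is the only change:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # grep exits 1 on an empty list, hence the true
    [ "$count" -eq 2 ]   # 2 while Malloc0/Malloc1 are exported, 0 after nbd_stop_disk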
00:05:30.730 14:22:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.730 14:22:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.730 14:22:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.730 14:22:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:30.730 14:22:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.730 Malloc0 00:05:30.730 14:22:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.988 Malloc1 00:05:30.988 14:22:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.988 14:22:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.554 /dev/nbd0 00:05:31.554 14:22:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.554 14:22:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.554 1+0 records in 00:05:31.554 1+0 records out 
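Each round re-creates the backing bdevs before exporting them over NBD. The two RPCs as traced, with the size comment being an interpretation of the standard bdev_malloc_create arguments (total size in MB, block size in bytes) rather than something stated in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # 64 MB RAM-backed bdev with a 4096-byte block size; the target names it Malloc0
    $rpc bdev_malloc_create 64 4096
    # export the bdev as a kernel block device
    $rpc nbd_start_disk Malloc0 /dev/nbd0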
00:05:31.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276301 s, 14.8 MB/s 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:31.554 14:22:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:31.554 14:22:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.554 14:22:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.554 14:22:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.554 /dev/nbd1 00:05:31.554 14:22:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.811 1+0 records in 00:05:31.811 1+0 records out 00:05:31.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394594 s, 10.4 MB/s 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:31.811 14:22:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.811 14:22:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.811 { 00:05:31.811 "bdev_name": "Malloc0", 00:05:31.811 "nbd_device": "/dev/nbd0" 00:05:31.811 }, 00:05:31.811 { 00:05:31.811 "bdev_name": "Malloc1", 00:05:31.811 "nbd_device": "/dev/nbd1" 00:05:31.811 } 
00:05:31.811 ]' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.069 { 00:05:32.069 "bdev_name": "Malloc0", 00:05:32.069 "nbd_device": "/dev/nbd0" 00:05:32.069 }, 00:05:32.069 { 00:05:32.069 "bdev_name": "Malloc1", 00:05:32.069 "nbd_device": "/dev/nbd1" 00:05:32.069 } 00:05:32.069 ]' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.069 /dev/nbd1' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.069 /dev/nbd1' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.069 256+0 records in 00:05:32.069 256+0 records out 00:05:32.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00822202 s, 128 MB/s 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.069 14:22:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.069 256+0 records in 00:05:32.070 256+0 records out 00:05:32.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260316 s, 40.3 MB/s 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.070 256+0 records in 00:05:32.070 256+0 records out 00:05:32.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0366179 s, 28.6 MB/s 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.070 14:22:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.070 14:22:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.328 14:22:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.586 14:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.844 14:22:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.844 14:22:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.101 14:22:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.360 [2024-07-15 14:22:12.799358] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.360 [2024-07-15 14:22:12.856100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.360 [2024-07-15 14:22:12.856109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.360 [2024-07-15 14:22:12.885294] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.360 [2024-07-15 14:22:12.885353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.642 14:22:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.642 spdk_app_start Round 2 00:05:36.642 14:22:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:36.642 14:22:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
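The write/verify phase repeated in every round can be reconstructed from the dd and cmp invocations in the trace: 1 MiB of random data is written to each exported NBD device with O_DIRECT and then byte-compared back. Wrapping the traced steps in one function and using a /tmp path are simplifications:

    verify_nbd_data() {
        local tmp=/tmp/nbdrandtest nbd
        dd if=/dev/urandom of="$tmp" bs=4096 count=256
        for nbd in /dev/nbd0 /dev/nbd1; do
            dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        done
        # read back and byte-compare the first 1 MiB of each device
        for nbd in /dev/nbd0 /dev/nbd1; do
            cmp -b -n 1M "$tmp" "$nbd"
        done
        rm "$tmp"
    }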
00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.642 14:22:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:36.642 14:22:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.922 Malloc0 00:05:36.922 14:22:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.187 Malloc1 00:05:37.187 14:22:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.187 14:22:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.445 /dev/nbd0 00:05:37.445 14:22:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.445 14:22:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.445 1+0 records in 00:05:37.445 1+0 records out 
00:05:37.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321671 s, 12.7 MB/s 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.445 14:22:16 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.445 14:22:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.445 14:22:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.445 14:22:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.703 /dev/nbd1 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.703 1+0 records in 00:05:37.703 1+0 records out 00:05:37.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252663 s, 16.2 MB/s 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:37.703 14:22:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.703 14:22:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:37.960 { 00:05:37.960 "bdev_name": "Malloc0", 00:05:37.960 "nbd_device": "/dev/nbd0" 00:05:37.960 }, 00:05:37.960 { 00:05:37.960 "bdev_name": "Malloc1", 00:05:37.960 "nbd_device": "/dev/nbd1" 00:05:37.960 } 
00:05:37.960 ]' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.960 { 00:05:37.960 "bdev_name": "Malloc0", 00:05:37.960 "nbd_device": "/dev/nbd0" 00:05:37.960 }, 00:05:37.960 { 00:05:37.960 "bdev_name": "Malloc1", 00:05:37.960 "nbd_device": "/dev/nbd1" 00:05:37.960 } 00:05:37.960 ]' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.960 /dev/nbd1' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.960 /dev/nbd1' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.960 256+0 records in 00:05:37.960 256+0 records out 00:05:37.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0085996 s, 122 MB/s 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.960 14:22:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.219 256+0 records in 00:05:38.219 256+0 records out 00:05:38.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265742 s, 39.5 MB/s 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.219 256+0 records in 00:05:38.219 256+0 records out 00:05:38.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285904 s, 36.7 MB/s 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.219 14:22:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.479 14:22:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.739 14:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.002 14:22:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.002 14:22:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.581 14:22:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.581 [2024-07-15 14:22:18.987756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.581 [2024-07-15 14:22:19.044870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.581 [2024-07-15 14:22:19.044879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.581 [2024-07-15 14:22:19.073517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.582 [2024-07-15 14:22:19.073591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.877 14:22:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62331 /var/tmp/spdk-nbd.sock 00:05:42.877 14:22:21 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62331 ']' 00:05:42.877 14:22:21 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.877 14:22:21 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.877 14:22:21 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
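The three "spdk_app_start Round" banners come from app_repeat restarting its SPDK app after each SIGTERM. A condensed view of the driver loop, pieced together from the event.sh steps in the trace; the helper names are the ones logged, and treating the loop body as a single call to nbd_rpc_data_verify summarizes the blocks shown above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # ask the running app to exit; app_repeat itself then brings up the next round
        $rpc spdk_kill_instance SIGTERM
        sleep 3
    done
    # the final banner shows Round 3 starting inside app_repeat; wait for it, then stop the process
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    killprocess "$repeat_pid"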
00:05:42.877 14:22:21 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.877 14:22:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.877 14:22:22 event.app_repeat -- event/event.sh@39 -- # killprocess 62331 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62331 ']' 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62331 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62331 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.877 killing process with pid 62331 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62331' 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62331 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62331 00:05:42.877 spdk_app_start is called in Round 0. 00:05:42.877 Shutdown signal received, stop current app iteration 00:05:42.877 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:05:42.877 spdk_app_start is called in Round 1. 00:05:42.877 Shutdown signal received, stop current app iteration 00:05:42.877 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:05:42.877 spdk_app_start is called in Round 2. 00:05:42.877 Shutdown signal received, stop current app iteration 00:05:42.877 Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 reinitialization... 00:05:42.877 spdk_app_start is called in Round 3. 
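killprocess, as it can be reconstructed from the checks traced for pids 62213 and 62331: verify the pid is alive, refuse to signal anything whose command name is sudo, then kill and wait. The error handling of the real helper may differ from this sketch:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0          # already gone
        if [ "$(uname)" = Linux ]; then
            # guard against signalling a privileged wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }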
00:05:42.877 Shutdown signal received, stop current app iteration 00:05:42.877 14:22:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.877 14:22:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.877 00:05:42.877 real 0m18.754s 00:05:42.877 user 0m42.753s 00:05:42.877 sys 0m2.816s 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.877 14:22:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.877 ************************************ 00:05:42.877 END TEST app_repeat 00:05:42.877 ************************************ 00:05:42.877 14:22:22 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.877 14:22:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.877 14:22:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.877 14:22:22 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.877 14:22:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.877 14:22:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.877 ************************************ 00:05:42.877 START TEST cpu_locks 00:05:42.877 ************************************ 00:05:42.877 14:22:22 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.877 * Looking for test storage... 00:05:42.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.877 14:22:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.877 14:22:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.877 14:22:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.877 14:22:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.877 14:22:22 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.877 14:22:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.877 14:22:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.877 ************************************ 00:05:42.877 START TEST default_locks 00:05:42.877 ************************************ 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62942 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62942 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62942 ']' 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
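Each cpu_locks sub-test starts spdk_tgt and then blocks in waitforlisten until the RPC socket appears, which is where the repeated 'Waiting for process to start up and listen on UNIX domain socket ...' lines come from. A rough equivalent of that polling loop, using only the socket path and retry count visible in the trace (the real helper in autotest_common.sh is more thorough; this only waits for the socket file, and assumes spdk_tgt_pid is already set):

  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  i=0
  until [ -S "$rpc_addr" ]; do
      # Bail out early if the target died before creating its RPC socket.
      kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
      i=$((i + 1))
      [ "$i" -ge "$max_retries" ] && { echo "timed out waiting for $rpc_addr" >&2; exit 1; }
      sleep 0.1
  done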
00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.877 14:22:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.136 [2024-07-15 14:22:22.531595] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:43.136 [2024-07-15 14:22:22.532388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62942 ] 00:05:43.136 [2024-07-15 14:22:22.669525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.394 [2024-07-15 14:22:22.737980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.960 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.960 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:43.960 14:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62942 00:05:43.960 14:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62942 00:05:43.960 14:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.525 14:22:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62942 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62942 ']' 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62942 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62942 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62942' 00:05:44.526 killing process with pid 62942 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62942 00:05:44.526 14:22:23 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62942 00:05:44.783 14:22:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62942 00:05:44.783 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:44.783 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62942 00:05:44.783 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:44.783 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62942 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- 
common/autotest_common.sh@829 -- # '[' -z 62942 ']' 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.784 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62942) - No such process 00:05:44.784 ERROR: process (pid: 62942) is no longer running 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.784 00:05:44.784 real 0m1.749s 00:05:44.784 user 0m2.016s 00:05:44.784 sys 0m0.455s 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.784 14:22:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.784 ************************************ 00:05:44.784 END TEST default_locks 00:05:44.784 ************************************ 00:05:44.784 14:22:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.784 14:22:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.784 14:22:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.784 14:22:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.784 14:22:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.784 ************************************ 00:05:44.784 START TEST default_locks_via_rpc 00:05:44.784 ************************************ 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63006 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63006 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63006 ']' 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.784 14:22:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.784 [2024-07-15 14:22:24.327678] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:44.784 [2024-07-15 14:22:24.327786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:05:45.041 [2024-07-15 14:22:24.462740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.041 [2024-07-15 14:22:24.532437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63006 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63006 00:05:45.973 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63006 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 63006 ']' 
00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 63006 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63006 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63006' 00:05:46.231 killing process with pid 63006 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 63006 00:05:46.231 14:22:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 63006 00:05:46.578 00:05:46.578 real 0m1.804s 00:05:46.578 user 0m2.053s 00:05:46.578 sys 0m0.499s 00:05:46.579 14:22:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.579 14:22:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.579 ************************************ 00:05:46.579 END TEST default_locks_via_rpc 00:05:46.579 ************************************ 00:05:46.579 14:22:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.579 14:22:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:46.579 14:22:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.579 14:22:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.579 14:22:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.579 ************************************ 00:05:46.579 START TEST non_locking_app_on_locked_coremask 00:05:46.579 ************************************ 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63075 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63075 /var/tmp/spdk.sock 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63075 ']' 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
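The default_locks_via_rpc run above flips the core-lock behaviour at runtime: framework_disable_cpumask_locks, a check that no lock file is held, then framework_enable_cpumask_locks followed by an lslocks check on the target pid. A condensed sketch of those steps, with the pid variable assumed to be set and error handling trimmed:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Release the per-core lock the target took for core 0 (-m 0x1).
  "$rpc_py" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  # Re-acquire it and confirm the target now holds a spdk_cpu_lock file lock.
  "$rpc_py" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  if ! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
      echo "expected a spdk_cpu_lock entry for pid $spdk_tgt_pid" >&2
      exit 1
  fi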
00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.579 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.838 [2024-07-15 14:22:26.189955] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:46.838 [2024-07-15 14:22:26.190064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63075 ] 00:05:46.838 [2024-07-15 14:22:26.327187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.838 [2024-07-15 14:22:26.384572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63084 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63084 /var/tmp/spdk2.sock 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63084 ']' 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.096 14:22:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.096 [2024-07-15 14:22:26.612784] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:47.096 [2024-07-15 14:22:26.612878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63084 ] 00:05:47.354 [2024-07-15 14:22:26.758301] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:47.354 [2024-07-15 14:22:26.758363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.354 [2024-07-15 14:22:26.876258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.289 14:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.289 14:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:48.289 14:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63075 00:05:48.289 14:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63075 00:05:48.289 14:22:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63075 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63075 ']' 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63075 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63075 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.246 killing process with pid 63075 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63075' 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63075 00:05:49.246 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63075 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63084 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63084 ']' 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63084 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63084 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.505 killing process with pid 63084 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63084' 00:05:49.505 14:22:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63084 00:05:49.505 14:22:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63084 00:05:49.763 00:05:49.763 real 0m3.116s 00:05:49.763 user 0m3.671s 00:05:49.763 sys 0m0.860s 00:05:49.763 14:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.763 14:22:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.763 ************************************ 00:05:49.763 END TEST non_locking_app_on_locked_coremask 00:05:49.763 ************************************ 00:05:49.763 14:22:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.763 14:22:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.763 14:22:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.763 14:22:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.763 14:22:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.763 ************************************ 00:05:49.763 START TEST locking_app_on_unlocked_coremask 00:05:49.763 ************************************ 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63163 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63163 /var/tmp/spdk.sock 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63163 ']' 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.763 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.022 [2024-07-15 14:22:29.367466] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:50.022 [2024-07-15 14:22:29.367581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63163 ] 00:05:50.022 [2024-07-15 14:22:29.507051] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.022 [2024-07-15 14:22:29.507109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.022 [2024-07-15 14:22:29.577470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63172 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63172 /var/tmp/spdk2.sock 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63172 ']' 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.282 14:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.282 [2024-07-15 14:22:29.820998] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
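In locking_app_on_unlocked_coremask the first target is started with --disable-cpumask-locks, so a second target can bind the same core mask (0x1) on a separate RPC socket without hitting the 'Cannot create lock on core 0' failure seen later in the log. Roughly, with the waitforlisten steps omitted:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # First instance: core 0, but skip taking the /var/tmp/spdk_cpu_lock_000 file lock.
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &
  pid1=$!
  # Second instance: same core, default locking, private RPC socket.
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!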
00:05:50.282 [2024-07-15 14:22:29.821093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63172 ] 00:05:50.541 [2024-07-15 14:22:29.967484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.541 [2024-07-15 14:22:30.082395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.498 14:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.498 14:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:51.498 14:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63172 00:05:51.498 14:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63172 00:05:51.498 14:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63163 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63163 ']' 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63163 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63163 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.084 killing process with pid 63163 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63163' 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63163 00:05:52.084 14:22:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63163 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63172 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63172 ']' 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63172 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63172 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.652 killing process with pid 63172 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63172' 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63172 00:05:52.652 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63172 00:05:52.912 00:05:52.912 real 0m3.090s 00:05:52.912 user 0m3.601s 00:05:52.912 sys 0m0.879s 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.912 ************************************ 00:05:52.912 END TEST locking_app_on_unlocked_coremask 00:05:52.912 ************************************ 00:05:52.912 14:22:32 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.912 14:22:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.912 14:22:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.912 14:22:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.912 14:22:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.912 ************************************ 00:05:52.912 START TEST locking_app_on_locked_coremask 00:05:52.912 ************************************ 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63252 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63252 /var/tmp/spdk.sock 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63252 ']' 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.912 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.912 [2024-07-15 14:22:32.490568] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:52.912 [2024-07-15 14:22:32.490678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:05:53.171 [2024-07-15 14:22:32.628006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.171 [2024-07-15 14:22:32.692272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.429 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.429 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.429 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63261 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63261 /var/tmp/spdk2.sock 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63261 /var/tmp/spdk2.sock 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63261 /var/tmp/spdk2.sock 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63261 ']' 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.430 14:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.430 [2024-07-15 14:22:32.931598] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:53.430 [2024-07-15 14:22:32.931731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63261 ] 00:05:53.688 [2024-07-15 14:22:33.081065] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63252 has claimed it. 00:05:53.688 [2024-07-15 14:22:33.081131] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.278 ERROR: process (pid: 63261) is no longer running 00:05:54.278 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63261) - No such process 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63252 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63252 00:05:54.278 14:22:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63252 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63252 ']' 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63252 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63252 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63252' 00:05:54.537 killing process with pid 63252 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63252 00:05:54.537 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63252 00:05:54.796 00:05:54.796 real 0m1.922s 00:05:54.796 user 0m2.264s 00:05:54.796 sys 0m0.535s 00:05:54.796 14:22:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.796 14:22:34 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:54.796 ************************************ 00:05:54.796 END TEST locking_app_on_locked_coremask 00:05:54.796 ************************************ 00:05:54.796 14:22:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.796 14:22:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.796 14:22:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.796 14:22:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.796 14:22:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.055 ************************************ 00:05:55.055 START TEST locking_overlapped_coremask 00:05:55.055 ************************************ 00:05:55.055 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:55.055 14:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63318 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63318 /var/tmp/spdk.sock 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63318 ']' 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.056 14:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.056 [2024-07-15 14:22:34.457738] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:55.056 [2024-07-15 14:22:34.457827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63318 ] 00:05:55.056 [2024-07-15 14:22:34.595011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.314 [2024-07-15 14:22:34.655767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.314 [2024-07-15 14:22:34.655885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.314 [2024-07-15 14:22:34.655890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63348 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63348 /var/tmp/spdk2.sock 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63348 /var/tmp/spdk2.sock 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63348 /var/tmp/spdk2.sock 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63348 ']' 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.880 14:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.139 [2024-07-15 14:22:35.506160] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:05:56.139 [2024-07-15 14:22:35.506243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63348 ] 00:05:56.139 [2024-07-15 14:22:35.651241] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63318 has claimed it. 00:05:56.139 [2024-07-15 14:22:35.651307] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.706 ERROR: process (pid: 63348) is no longer running 00:05:56.706 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63348) - No such process 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63318 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63318 ']' 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63318 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63318 00:05:56.706 killing process with pid 63318 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63318' 00:05:56.706 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63318 00:05:56.706 14:22:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63318 00:05:56.965 ************************************ 00:05:56.965 END TEST locking_overlapped_coremask 00:05:56.965 ************************************ 00:05:56.965 00:05:56.965 real 0m2.098s 00:05:56.965 user 0m6.084s 00:05:56.965 sys 0m0.308s 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.965 14:22:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.965 14:22:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.965 14:22:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.965 14:22:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.965 14:22:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.965 ************************************ 00:05:56.965 START TEST locking_overlapped_coremask_via_rpc 00:05:56.965 ************************************ 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63394 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63394 /var/tmp/spdk.sock 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63394 ']' 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.965 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.223 [2024-07-15 14:22:36.592664] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:57.223 [2024-07-15 14:22:36.592757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63394 ] 00:05:57.223 [2024-07-15 14:22:36.728720] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
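After the overlap attempt in locking_overlapped_coremask fails, check_remaining_locks (traced above) asserts that the surviving target, started with -m 0x7, is holding exactly the lock files for cores 0 through 2. The comparison amounts to:

  # Expect one /var/tmp/spdk_cpu_lock_NNN file per core in the 0x7 mask (cores 0, 1, 2).
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  if [[ "${locks[*]}" != "${locks_expected[*]}" ]]; then
      echo "unexpected CPU lock files: ${locks[*]}" >&2
      exit 1
  fi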
00:05:57.223 [2024-07-15 14:22:36.728777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.223 [2024-07-15 14:22:36.802767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.223 [2024-07-15 14:22:36.802854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.223 [2024-07-15 14:22:36.802843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63405 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63405 /var/tmp/spdk2.sock 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63405 ']' 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.481 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.482 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.482 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.482 14:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.482 [2024-07-15 14:22:37.038718] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:57.482 [2024-07-15 14:22:37.038988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ] 00:05:57.740 [2024-07-15 14:22:37.187772] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.740 [2024-07-15 14:22:37.187825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.740 [2024-07-15 14:22:37.313578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.740 [2024-07-15 14:22:37.313683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.740 [2024-07-15 14:22:37.313684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.674 [2024-07-15 14:22:38.112878] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63394 has claimed it. 00:05:58.674 2024/07/15 14:22:38 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:58.674 request: 00:05:58.674 { 00:05:58.674 "method": "framework_enable_cpumask_locks", 00:05:58.674 "params": {} 00:05:58.674 } 00:05:58.674 Got JSON-RPC error response 00:05:58.674 GoRPCClient: error on JSON-RPC call 00:05:58.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
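The Code=-32603 response above is the expected result: the first target (-m 0x7) claimed cores 0-2 when framework_enable_cpumask_locks was issued on the default socket, so the second target (-m 0x1c, cores 2-4) cannot claim core 2. A minimal manual reproduction outside the rpc_cmd/waitforlisten harness, assuming a built SPDK tree and that scripts/rpc.py exposes the same method names the wrapper passes through:

    # first target: cores 0-2, core locks initially off
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    # (wait for /var/tmp/spdk.sock to appear, as waitforlisten does, before issuing RPCs)
    ./scripts/rpc.py framework_enable_cpumask_locks          # claims locks on cores 0-2

    # second target: cores 2-4, RPC on a separate socket
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected failure: Code=-32603, "Failed to claim CPU core: 2"

The harness then re-registers both PIDs with waitforlisten before checking which lock files remain.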
00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63394 /var/tmp/spdk.sock 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63394 ']' 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.674 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63405 /var/tmp/spdk2.sock 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63405 ']' 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.933 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.192 00:05:59.192 real 0m2.162s 00:05:59.192 user 0m1.276s 00:05:59.192 sys 0m0.198s 00:05:59.192 ************************************ 00:05:59.192 END TEST locking_overlapped_coremask_via_rpc 00:05:59.192 ************************************ 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.192 14:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:59.192 14:22:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:59.192 14:22:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63394 ]] 00:05:59.192 14:22:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63394 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63394 ']' 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63394 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63394 00:05:59.192 killing process with pid 63394 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63394' 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63394 00:05:59.192 14:22:38 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63394 00:05:59.450 14:22:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63405 ]] 00:05:59.450 14:22:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63405 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63405 ']' 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63405 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:59.450 14:22:39 
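check_remaining_locks above resolves the glob /var/tmp/spdk_cpu_lock_* and compares it against the set expected for mask 0x7, one file per claimed core. The same check by hand, while the first target is still up with locks enabled (a sketch under the same assumptions as above):

    ls /var/tmp/spdk_cpu_lock_*
    # expected: /var/tmp/spdk_cpu_lock_000  /var/tmp/spdk_cpu_lock_001  /var/tmp/spdk_cpu_lock_002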
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63405 00:05:59.450 killing process with pid 63405 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63405' 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63405 00:05:59.450 14:22:39 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63405 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.707 Process with pid 63394 is not found 00:05:59.707 Process with pid 63405 is not found 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63394 ]] 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63394 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63394 ']' 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63394 00:05:59.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63394) - No such process 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63394 is not found' 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63405 ]] 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63405 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63405 ']' 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63405 00:05:59.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63405) - No such process 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63405 is not found' 00:05:59.707 14:22:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.707 00:05:59.707 real 0m16.913s 00:05:59.707 user 0m31.484s 00:05:59.707 sys 0m4.353s 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.707 ************************************ 00:05:59.707 END TEST cpu_locks 00:05:59.707 ************************************ 00:05:59.707 14:22:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.965 14:22:39 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.965 ************************************ 00:05:59.965 END TEST event 00:05:59.965 ************************************ 00:05:59.965 00:05:59.965 real 0m44.421s 00:05:59.965 user 1m29.463s 00:05:59.965 sys 0m7.830s 00:05:59.965 14:22:39 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.965 14:22:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.965 14:22:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.965 14:22:39 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:59.965 14:22:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.965 14:22:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.965 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:05:59.965 ************************************ 00:05:59.965 START TEST thread 
00:05:59.965 ************************************ 00:05:59.965 14:22:39 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:59.965 * Looking for test storage... 00:05:59.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:59.965 14:22:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.965 14:22:39 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:59.965 14:22:39 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.965 14:22:39 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.965 ************************************ 00:05:59.965 START TEST thread_poller_perf 00:05:59.965 ************************************ 00:05:59.965 14:22:39 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.965 [2024-07-15 14:22:39.479962] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:05:59.965 [2024-07-15 14:22:39.480058] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63553 ] 00:06:00.222 [2024-07-15 14:22:39.611139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.222 [2024-07-15 14:22:39.672770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.222 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:01.592 ====================================== 00:06:01.592 busy:2209036766 (cyc) 00:06:01.592 total_run_count: 302000 00:06:01.592 tsc_hz: 2200000000 (cyc) 00:06:01.592 ====================================== 00:06:01.592 poller_cost: 7314 (cyc), 3324 (nsec) 00:06:01.592 00:06:01.592 real 0m1.287s 00:06:01.592 user 0m1.147s 00:06:01.592 sys 0m0.033s 00:06:01.592 14:22:40 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.592 14:22:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.592 ************************************ 00:06:01.592 END TEST thread_poller_perf 00:06:01.592 ************************************ 00:06:01.592 14:22:40 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:01.592 14:22:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.592 14:22:40 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:01.592 14:22:40 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.593 14:22:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.593 ************************************ 00:06:01.593 START TEST thread_poller_perf 00:06:01.593 ************************************ 00:06:01.593 14:22:40 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.593 [2024-07-15 14:22:40.816956] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
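poller_cost in the summary above is derived from the other three figures: busy cycles divided by total_run_count, then converted to nanoseconds through tsc_hz. Re-deriving the 1 microsecond period result from the numbers in the log (a sketch; awk only for the floating-point step):

    awk 'BEGIN { c = 2209036766 / 302000;     # busy cycles / total_run_count
                 printf "%d cyc, %d nsec\n", int(c), int(c * 1e9 / 2200000000) }'
    # prints: 7314 cyc, 3324 nsec

The same arithmetic applied to the 0 microsecond period run that follows gives 539 cyc and 245 nsec, matching its summary line.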
00:06:01.593 [2024-07-15 14:22:40.817055] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63588 ] 00:06:01.593 [2024-07-15 14:22:40.954745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.593 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:01.593 [2024-07-15 14:22:41.015329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.525 ====================================== 00:06:02.525 busy:2202138252 (cyc) 00:06:02.525 total_run_count: 4082000 00:06:02.525 tsc_hz: 2200000000 (cyc) 00:06:02.525 ====================================== 00:06:02.525 poller_cost: 539 (cyc), 245 (nsec) 00:06:02.525 00:06:02.525 real 0m1.292s 00:06:02.525 user 0m1.144s 00:06:02.525 sys 0m0.041s 00:06:02.525 14:22:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.525 ************************************ 00:06:02.525 END TEST thread_poller_perf 00:06:02.525 ************************************ 00:06:02.525 14:22:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.781 14:22:42 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:02.781 14:22:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.781 ************************************ 00:06:02.781 END TEST thread 00:06:02.781 ************************************ 00:06:02.781 00:06:02.781 real 0m2.759s 00:06:02.781 user 0m2.363s 00:06:02.781 sys 0m0.180s 00:06:02.781 14:22:42 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.782 14:22:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.782 14:22:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.782 14:22:42 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:02.782 14:22:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.782 14:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.782 14:22:42 -- common/autotest_common.sh@10 -- # set +x 00:06:02.782 ************************************ 00:06:02.782 START TEST accel 00:06:02.782 ************************************ 00:06:02.782 14:22:42 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:02.782 * Looking for test storage... 00:06:02.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:02.782 14:22:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:02.782 14:22:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:02.782 14:22:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.782 14:22:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63663 00:06:02.782 14:22:42 accel -- accel/accel.sh@63 -- # waitforlisten 63663 00:06:02.782 14:22:42 accel -- common/autotest_common.sh@829 -- # '[' -z 63663 ']' 00:06:02.782 14:22:42 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.782 14:22:42 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.782 14:22:42 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:02.782 14:22:42 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.782 14:22:42 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.782 14:22:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.782 14:22:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:02.782 14:22:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.782 14:22:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.782 14:22:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.782 14:22:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.782 14:22:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.782 14:22:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:02.782 14:22:42 accel -- accel/accel.sh@41 -- # jq -r . 00:06:02.782 [2024-07-15 14:22:42.326582] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:02.782 [2024-07-15 14:22:42.326681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:06:03.039 [2024-07-15 14:22:42.466454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.039 [2024-07-15 14:22:42.535531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.973 14:22:43 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.973 14:22:43 accel -- common/autotest_common.sh@862 -- # return 0 00:06:03.973 14:22:43 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:03.973 14:22:43 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:03.973 14:22:43 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:03.973 14:22:43 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:03.973 14:22:43 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:03.973 14:22:43 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:03.973 14:22:43 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:03.973 14:22:43 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.973 14:22:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.973 14:22:43 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.973 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.973 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.973 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.974 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.974 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.974 
14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.974 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.974 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.974 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.974 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.974 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.974 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.974 14:22:43 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # IFS== 00:06:03.974 14:22:43 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:03.974 14:22:43 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:03.974 14:22:43 accel -- accel/accel.sh@75 -- # killprocess 63663 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@948 -- # '[' -z 63663 ']' 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@952 -- # kill -0 63663 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@953 -- # uname 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63663 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.974 killing process with pid 63663 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63663' 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@967 -- # kill 63663 00:06:03.974 14:22:43 accel -- common/autotest_common.sh@972 -- # wait 63663 00:06:04.232 14:22:43 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:04.232 14:22:43 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 14:22:43 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:04.232 14:22:43 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
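The read loop above populates expected_opcs by asking the target which module backs each opcode; with no accel JSON config loaded, every opcode reports the software module. The same query can be made by hand against a running target, assuming scripts/rpc.py is called directly rather than through the rpc_cmd wrapper (the jq filter is the one from the trace):

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # one line per opcode, e.g. copy=software, fill=software, crc32c=software, ...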
00:06:04.232 14:22:43 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.232 14:22:43 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.232 14:22:43 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.232 14:22:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.232 ************************************ 00:06:04.232 START TEST accel_missing_filename 00:06:04.232 ************************************ 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.232 14:22:43 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:04.232 14:22:43 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:04.232 [2024-07-15 14:22:43.779954] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:04.232 [2024-07-15 14:22:43.780043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63731 ] 00:06:04.490 [2024-07-15 14:22:43.916311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.490 [2024-07-15 14:22:43.992647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.490 [2024-07-15 14:22:44.025822] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.490 [2024-07-15 14:22:44.068269] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:04.749 A filename is required. 
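accel_missing_filename above and accel_compress_verify below are both negative argument checks against the same binary; stripped of the NOT/xtrace harness they amount to (paths relative to the SPDK repo, as in the trace):

    ./build/examples/accel_perf -t 1 -w compress
    # -> 'A filename is required.'  (compress/decompress need -l <uncompressed input file>)

    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y
    # -> 'Compression does not support the verify option, aborting.'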
00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.749 00:06:04.749 real 0m0.398s 00:06:04.749 user 0m0.261s 00:06:04.749 sys 0m0.084s 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.749 14:22:44 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:04.749 ************************************ 00:06:04.749 END TEST accel_missing_filename 00:06:04.749 ************************************ 00:06:04.749 14:22:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.749 14:22:44 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.749 14:22:44 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:04.749 14:22:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.749 14:22:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.749 ************************************ 00:06:04.749 START TEST accel_compress_verify 00:06:04.749 ************************************ 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.749 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.749 14:22:44 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:06:04.749 14:22:44 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:04.749 [2024-07-15 14:22:44.235565] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:04.749 [2024-07-15 14:22:44.235690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63751 ] 00:06:05.007 [2024-07-15 14:22:44.383639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.007 [2024-07-15 14:22:44.451890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.007 [2024-07-15 14:22:44.485068] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.007 [2024-07-15 14:22:44.526672] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:05.265 00:06:05.265 Compression does not support the verify option, aborting. 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.265 00:06:05.265 real 0m0.403s 00:06:05.265 user 0m0.282s 00:06:05.265 sys 0m0.085s 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.265 14:22:44 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:05.265 ************************************ 00:06:05.265 END TEST accel_compress_verify 00:06:05.265 ************************************ 00:06:05.265 14:22:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.265 14:22:44 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:05.265 14:22:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:05.265 14:22:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.265 14:22:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.265 ************************************ 00:06:05.265 START TEST accel_wrong_workload 00:06:05.265 ************************************ 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:05.266 14:22:44 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:05.266 Unsupported workload type: foobar 00:06:05.266 [2024-07-15 14:22:44.677155] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:05.266 accel_perf options: 00:06:05.266 [-h help message] 00:06:05.266 [-q queue depth per core] 00:06:05.266 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.266 [-T number of threads per core 00:06:05.266 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.266 [-t time in seconds] 00:06:05.266 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.266 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:05.266 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.266 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.266 [-S for crc32c workload, use this seed value (default 0) 00:06:05.266 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.266 [-f for fill workload, use this BYTE value (default 255) 00:06:05.266 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.266 [-y verify result if this switch is on] 00:06:05.266 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.266 Can be used to spread operations across a wider range of memory. 
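The usage dump above comes from the accel_wrong_workload case: -w foobar is rejected because the workload must be one of the listed types, and accel_negative_buffers immediately after repeats the pattern with -x -1. The positive runs later in the section use exactly these flags; the accel_crc32c case below, minus the /dev/fd config plumbing, reduces to:

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # 1-second software crc32c run, seed 32 (-S), results verified (-y)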
00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.266 00:06:05.266 real 0m0.033s 00:06:05.266 user 0m0.015s 00:06:05.266 sys 0m0.018s 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.266 14:22:44 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:05.266 ************************************ 00:06:05.266 END TEST accel_wrong_workload 00:06:05.266 ************************************ 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.266 14:22:44 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.266 ************************************ 00:06:05.266 START TEST accel_negative_buffers 00:06:05.266 ************************************ 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:05.266 14:22:44 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:05.266 -x option must be non-negative. 
00:06:05.266 [2024-07-15 14:22:44.746491] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:05.266 accel_perf options: 00:06:05.266 [-h help message] 00:06:05.266 [-q queue depth per core] 00:06:05.266 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.266 [-T number of threads per core 00:06:05.266 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.266 [-t time in seconds] 00:06:05.266 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.266 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:05.266 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.266 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.266 [-S for crc32c workload, use this seed value (default 0) 00:06:05.266 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.266 [-f for fill workload, use this BYTE value (default 255) 00:06:05.266 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.266 [-y verify result if this switch is on] 00:06:05.266 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.266 Can be used to spread operations across a wider range of memory. 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.266 00:06:05.266 real 0m0.025s 00:06:05.266 user 0m0.011s 00:06:05.266 sys 0m0.014s 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.266 14:22:44 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:05.266 ************************************ 00:06:05.266 END TEST accel_negative_buffers 00:06:05.266 ************************************ 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.266 14:22:44 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.266 14:22:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.266 ************************************ 00:06:05.266 START TEST accel_crc32c 00:06:05.266 ************************************ 00:06:05.266 14:22:44 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:05.266 14:22:44 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:05.266 [2024-07-15 14:22:44.822249] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:05.266 [2024-07-15 14:22:44.822333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63816 ] 00:06:05.525 [2024-07-15 14:22:44.962191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.525 [2024-07-15 14:22:45.034723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.525 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.526 14:22:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.901 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:06.902 14:22:46 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.902 00:06:06.902 real 0m1.390s 00:06:06.902 user 0m1.216s 00:06:06.902 sys 0m0.081s 00:06:06.902 14:22:46 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.902 ************************************ 00:06:06.902 END TEST accel_crc32c 00:06:06.902 ************************************ 00:06:06.902 14:22:46 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 14:22:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.902 14:22:46 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:06.902 14:22:46 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:06.902 14:22:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.902 14:22:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.902 ************************************ 00:06:06.902 START TEST accel_crc32c_C2 00:06:06.902 ************************************ 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:06.902 14:22:46 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.902 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:06.902 [2024-07-15 14:22:46.257022] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:06.902 [2024-07-15 14:22:46.257121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63846 ] 00:06:06.902 [2024-07-15 14:22:46.392596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.902 [2024-07-15 14:22:46.471632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.160 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.161 14:22:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.094 14:22:47 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.094 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.095 00:06:08.095 real 0m1.381s 00:06:08.095 user 0m1.214s 00:06:08.095 sys 0m0.071s 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.095 14:22:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:08.095 ************************************ 00:06:08.095 END TEST accel_crc32c_C2 00:06:08.095 ************************************ 00:06:08.095 14:22:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.095 14:22:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:08.095 14:22:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:08.095 14:22:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.095 14:22:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.095 ************************************ 00:06:08.095 START TEST accel_copy 00:06:08.095 ************************************ 00:06:08.095 14:22:47 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.095 14:22:47 accel.accel_copy -- 
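Editor's note: the START TEST / END TEST banners and the real/user/sys summaries in this log come from the harness's run_test wrapper (the common/autotest_common.sh frames in the trace). The stand-in below only illustrates where those markers and timings originate; it is not SPDK's actual implementation.

accel_perf_bin=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
run_test_sketch() {                      # illustrative stand-in for run_test
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                            # this is what produces the real/user/sys lines
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
run_test_sketch accel_copy "$accel_perf_bin" -t 1 -w copy -y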
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:08.095 14:22:47 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:08.095 [2024-07-15 14:22:47.687622] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:08.095 [2024-07-15 14:22:47.687714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63881 ] 00:06:08.354 [2024-07-15 14:22:47.824458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.354 [2024-07-15 14:22:47.893563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 
14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.354 14:22:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:09.741 14:22:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.741 00:06:09.741 real 0m1.377s 00:06:09.741 user 0m0.012s 00:06:09.741 sys 0m0.002s 00:06:09.741 14:22:49 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.741 ************************************ 00:06:09.741 END TEST accel_copy 00:06:09.741 ************************************ 00:06:09.741 14:22:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.741 14:22:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:09.741 14:22:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.741 14:22:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:09.741 14:22:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.741 14:22:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.741 ************************************ 00:06:09.741 START TEST accel_fill 00:06:09.741 ************************************ 00:06:09.741 14:22:49 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.741 14:22:49 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:09.741 14:22:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:09.741 [2024-07-15 14:22:49.118293] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:09.741 [2024-07-15 14:22:49.118439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63915 ] 00:06:09.741 [2024-07-15 14:22:49.255665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.741 [2024-07-15 14:22:49.325574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.062 14:22:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
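Editor's note: the fill case is the only workload in this stretch launched with extra flags (-f 128 -q 64 -a 64); in the banner being parsed above, 128 surfaces as the 0x80 fill pattern and 64 shows up twice, which I read as the queue and allocate depths. Just this case can be re-run by hand from the same build tree; the harness additionally passes -c /dev/fd/62 with a generated JSON config, and dropping it here presumably leaves accel_perf on its built-in software path.

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y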
00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:10.997 14:22:50 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.997 00:06:10.997 real 0m1.386s 00:06:10.997 user 0m0.015s 00:06:10.997 sys 0m0.004s 00:06:10.997 14:22:50 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.997 14:22:50 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:10.997 ************************************ 00:06:10.997 END TEST accel_fill 00:06:10.997 ************************************ 00:06:10.997 14:22:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.997 14:22:50 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:10.997 14:22:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.997 14:22:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.997 14:22:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.997 ************************************ 00:06:10.997 START TEST accel_copy_crc32c 00:06:10.997 ************************************ 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:06:10.997 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:10.997 [2024-07-15 14:22:50.544395] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:10.997 [2024-07-15 14:22:50.544480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63951 ] 00:06:11.255 [2024-07-15 14:22:50.676122] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.255 [2024-07-15 14:22:50.734006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.255 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.255 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.255 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.255 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.255 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.255 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.256 14:22:50 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
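Editor's note: copy_crc32c (pid 63951 above) combines a buffer copy with a CRC-32C over the same data, which is why its banner carries one more byte-count field than the plain crc32c run. With the console output saved to a file (accel.log is an assumed name, not something this job writes), the exact accel_perf command line for each case in this section can be listed in one pass:

grep -oE 'accel_perf -c /dev/fd/62.*' accel.log | sort -u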
00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.658 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.659 00:06:12.659 real 0m1.359s 00:06:12.659 user 0m1.194s 00:06:12.659 sys 0m0.074s 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.659 14:22:51 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:12.659 ************************************ 00:06:12.659 END TEST accel_copy_crc32c 00:06:12.659 ************************************ 00:06:12.659 14:22:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.659 14:22:51 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.659 14:22:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:12.659 14:22:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.659 14:22:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.659 ************************************ 00:06:12.659 START TEST accel_copy_crc32c_C2 00:06:12.659 ************************************ 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.659 14:22:51 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:12.659 [2024-07-15 14:22:51.942732] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:12.659 [2024-07-15 14:22:51.942808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63980 ] 00:06:12.659 [2024-07-15 14:22:52.079059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.659 [2024-07-15 14:22:52.140196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.659 14:22:52 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.035 00:06:14.035 real 0m1.360s 00:06:14.035 user 0m0.015s 00:06:14.035 sys 0m0.002s 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
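Editor's note: the timings just above close out the -C 2 variant of copy_crc32c (pid 63980). Compared with the plain run, its banner shows an 8192-byte count next to the usual 4096, which I take to be the combined size of the two chained source buffers that -C 2 requests; both variants can be reproduced by hand with the flags the harness used.

bin=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$bin" -t 1 -w copy_crc32c -y          # single 4096-byte source, as in the earlier run
"$bin" -t 1 -w copy_crc32c -y -C 2     # chained variant timed just above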
00:06:14.035 ************************************ 00:06:14.035 END TEST accel_copy_crc32c_C2 00:06:14.035 14:22:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:14.035 ************************************ 00:06:14.035 14:22:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.035 14:22:53 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:14.035 14:22:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.035 14:22:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.035 14:22:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.035 ************************************ 00:06:14.035 START TEST accel_dualcast 00:06:14.035 ************************************ 00:06:14.035 14:22:53 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:14.035 [2024-07-15 14:22:53.347485] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:14.035 [2024-07-15 14:22:53.347562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64015 ] 00:06:14.035 [2024-07-15 14:22:53.480008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.035 [2024-07-15 14:22:53.550742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.035 14:22:53 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:15.426 14:22:54 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.426 00:06:15.426 real 0m1.374s 00:06:15.426 user 0m1.205s 00:06:15.426 sys 0m0.079s 00:06:15.426 14:22:54 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.426 14:22:54 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:15.426 ************************************ 00:06:15.426 END TEST accel_dualcast 00:06:15.426 ************************************ 00:06:15.426 14:22:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.427 14:22:54 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:15.427 14:22:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:15.427 14:22:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.427 14:22:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.427 ************************************ 00:06:15.427 START TEST accel_compare 00:06:15.427 ************************************ 00:06:15.427 14:22:54 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:15.427 14:22:54 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:15.427 [2024-07-15 14:22:54.766485] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:15.427 [2024-07-15 14:22:54.766582] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64052 ] 00:06:15.428 [2024-07-15 14:22:54.902759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.428 [2024-07-15 14:22:54.962661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:15.428 14:22:54 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:54 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:15.429 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:54 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.429 14:22:55 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:16.854 14:22:56 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.854 00:06:16.854 real 0m1.368s 00:06:16.854 user 0m1.201s 00:06:16.854 sys 0m0.075s 00:06:16.854 14:22:56 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.854 ************************************ 00:06:16.854 END TEST accel_compare 00:06:16.854 ************************************ 00:06:16.854 14:22:56 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:16.854 14:22:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.854 14:22:56 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:16.854 14:22:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.854 14:22:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.854 14:22:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.854 ************************************ 00:06:16.854 START TEST accel_xor 00:06:16.854 ************************************ 00:06:16.854 14:22:56 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:16.854 [2024-07-15 14:22:56.185048] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:16.854 [2024-07-15 14:22:56.185142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64081 ] 00:06:16.854 [2024-07-15 14:22:56.321537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.854 [2024-07-15 14:22:56.381372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.854 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.855 14:22:56 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.231 14:22:57 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.231 00:06:18.231 real 0m1.371s 00:06:18.231 user 0m1.209s 00:06:18.231 sys 0m0.070s 00:06:18.231 14:22:57 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.231 14:22:57 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:18.231 ************************************ 00:06:18.231 END TEST accel_xor 00:06:18.231 ************************************ 00:06:18.231 14:22:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.231 14:22:57 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:18.231 14:22:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:18.231 14:22:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.231 14:22:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.231 ************************************ 00:06:18.231 START TEST accel_xor 00:06:18.231 ************************************ 00:06:18.231 14:22:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:18.231 14:22:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:18.231 [2024-07-15 14:22:57.602221] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:18.231 [2024-07-15 14:22:57.602487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64121 ] 00:06:18.231 [2024-07-15 14:22:57.744645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.231 [2024-07-15 14:22:57.805966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.489 14:22:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.423 ************************************ 00:06:19.423 END TEST accel_xor 00:06:19.423 ************************************ 00:06:19.423 14:22:58 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:19.423 14:22:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.423 00:06:19.423 real 0m1.392s 00:06:19.423 user 0m1.210s 00:06:19.423 sys 0m0.088s 00:06:19.423 14:22:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.423 14:22:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:19.423 14:22:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.423 14:22:59 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:19.423 14:22:59 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.423 14:22:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.423 14:22:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.423 ************************************ 00:06:19.423 START TEST accel_dif_verify 00:06:19.423 ************************************ 00:06:19.423 14:22:59 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:19.423 14:22:59 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:19.423 14:22:59 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:19.680 14:22:59 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:19.680 [2024-07-15 14:22:59.042005] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:19.680 [2024-07-15 14:22:59.042102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64150 ] 00:06:19.680 [2024-07-15 14:22:59.178206] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.680 [2024-07-15 14:22:59.239307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.938 14:22:59 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.869 14:23:00 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:20.869 14:23:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.869 00:06:20.869 real 0m1.383s 00:06:20.869 user 0m1.218s 00:06:20.869 sys 0m0.073s 00:06:20.869 14:23:00 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.869 ************************************ 00:06:20.869 END TEST accel_dif_verify 00:06:20.869 ************************************ 00:06:20.869 14:23:00 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:20.869 14:23:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:20.869 14:23:00 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:20.869 14:23:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:20.869 14:23:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.869 14:23:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.869 ************************************ 00:06:20.869 START TEST accel_dif_generate 00:06:20.869 ************************************ 00:06:20.869 14:23:00 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.869 14:23:00 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:20.869 14:23:00 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:21.128 [2024-07-15 14:23:00.471596] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:21.128 [2024-07-15 14:23:00.471776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64179 ] 00:06:21.128 [2024-07-15 14:23:00.614106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.128 [2024-07-15 14:23:00.685427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.128 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.386 14:23:00 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.386 14:23:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:22.358 14:23:01 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.358 00:06:22.358 real 0m1.395s 
00:06:22.358 user 0m1.225s 00:06:22.358 sys 0m0.077s 00:06:22.358 14:23:01 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.358 14:23:01 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:22.358 ************************************ 00:06:22.358 END TEST accel_dif_generate 00:06:22.358 ************************************ 00:06:22.358 14:23:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.358 14:23:01 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:22.358 14:23:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:22.358 14:23:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.358 14:23:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.358 ************************************ 00:06:22.358 START TEST accel_dif_generate_copy 00:06:22.358 ************************************ 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:22.358 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.359 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.359 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.359 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.359 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.359 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:22.359 14:23:01 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:22.359 [2024-07-15 14:23:01.916344] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:22.359 [2024-07-15 14:23:01.916442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64219 ] 00:06:22.617 [2024-07-15 14:23:02.053429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.617 [2024-07-15 14:23:02.126207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:22.617 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.618 14:23:02 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.995 00:06:23.995 real 0m1.394s 00:06:23.995 user 0m1.223s 00:06:23.995 sys 0m0.075s 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.995 ************************************ 00:06:23.995 END TEST accel_dif_generate_copy 00:06:23.995 ************************************ 00:06:23.995 14:23:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.995 14:23:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.995 14:23:03 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:23.995 14:23:03 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.995 14:23:03 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:23.995 14:23:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.995 14:23:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.995 ************************************ 00:06:23.995 START TEST accel_comp 00:06:23.995 ************************************ 00:06:23.995 14:23:03 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:23.995 14:23:03 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:23.995 14:23:03 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:23.995 [2024-07-15 14:23:03.361129] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:23.995 [2024-07-15 14:23:03.361244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64248 ] 00:06:23.995 [2024-07-15 14:23:03.497889] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.996 [2024-07-15 14:23:03.556732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.254 14:23:03 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.189 14:23:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:25.190 ************************************ 00:06:25.190 END TEST accel_comp 00:06:25.190 ************************************ 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:25.190 14:23:04 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.190 00:06:25.190 real 0m1.376s 00:06:25.190 user 0m1.203s 00:06:25.190 sys 0m0.076s 00:06:25.190 14:23:04 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.190 14:23:04 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:25.190 14:23:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.190 14:23:04 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.190 14:23:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:25.190 14:23:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.190 14:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.190 ************************************ 00:06:25.190 START TEST accel_decomp 00:06:25.190 ************************************ 00:06:25.190 14:23:04 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:25.190 14:23:04 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:25.449 [2024-07-15 14:23:04.789796] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:25.449 [2024-07-15 14:23:04.789879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64287 ] 00:06:25.449 [2024-07-15 14:23:04.921827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.449 [2024-07-15 14:23:04.995301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.449 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.707 14:23:05 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.641 ************************************ 00:06:26.641 END TEST accel_decomp 00:06:26.641 ************************************ 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.641 14:23:06 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.641 00:06:26.641 real 0m1.397s 00:06:26.641 user 0m1.218s 00:06:26.641 sys 0m0.084s 00:06:26.641 14:23:06 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.641 14:23:06 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:26.641 14:23:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.641 14:23:06 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:26.641 14:23:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:26.641 14:23:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.641 14:23:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.641 ************************************ 00:06:26.641 START TEST accel_decomp_full 00:06:26.641 ************************************ 00:06:26.641 14:23:06 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:26.641 14:23:06 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:26.641 14:23:06 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:26.641 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.641 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.641 14:23:06 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:26.641 14:23:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:26.642 14:23:06 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:26.901 [2024-07-15 14:23:06.238867] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:26.901 [2024-07-15 14:23:06.238990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64317 ] 00:06:26.901 [2024-07-15 14:23:06.384668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.901 [2024-07-15 14:23:06.446156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.901 14:23:06 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:26.901 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.902 14:23:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.279 14:23:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.280 14:23:07 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.280 00:06:28.280 real 0m1.400s 00:06:28.280 user 0m1.232s 00:06:28.280 sys 0m0.072s 00:06:28.280 14:23:07 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.280 ************************************ 00:06:28.280 END TEST accel_decomp_full 00:06:28.280 ************************************ 00:06:28.280 14:23:07 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:28.280 14:23:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.280 14:23:07 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:28.280 14:23:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:28.280 14:23:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.280 14:23:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.280 ************************************ 00:06:28.280 START TEST accel_decomp_mcore 00:06:28.280 ************************************ 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:28.280 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:28.280 [2024-07-15 14:23:07.678773] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:28.280 [2024-07-15 14:23:07.678873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64346 ] 00:06:28.280 [2024-07-15 14:23:07.817300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.539 [2024-07-15 14:23:07.893965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.539 [2024-07-15 14:23:07.894039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.539 [2024-07-15 14:23:07.894145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.539 [2024-07-15 14:23:07.894142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.539 14:23:07 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.474 14:23:09 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.474 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.733 ************************************ 00:06:29.733 END TEST accel_decomp_mcore 00:06:29.733 ************************************ 00:06:29.733 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.734 00:06:29.734 real 0m1.412s 00:06:29.734 user 0m4.445s 00:06:29.734 sys 0m0.101s 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.734 14:23:09 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:29.734 14:23:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.734 14:23:09 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.734 14:23:09 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:29.734 14:23:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.734 14:23:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.734 ************************************ 00:06:29.734 START TEST accel_decomp_full_mcore 00:06:29.734 ************************************ 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.734 14:23:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:29.734 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:29.734 [2024-07-15 14:23:09.144144] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:29.734 [2024-07-15 14:23:09.144231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64389 ] 00:06:29.734 [2024-07-15 14:23:09.281528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.992 [2024-07-15 14:23:09.356825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.992 [2024-07-15 14:23:09.356914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.992 [2024-07-15 14:23:09.357037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.992 [2024-07-15 14:23:09.357045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.992 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.993 14:23:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.993 14:23:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.393 00:06:31.393 real 0m1.426s 00:06:31.393 user 0m4.517s 00:06:31.393 sys 0m0.094s 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.393 14:23:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 ************************************ 00:06:31.393 END TEST accel_decomp_full_mcore 00:06:31.393 ************************************ 00:06:31.393 14:23:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.393 14:23:10 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.393 14:23:10 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:31.393 14:23:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.393 14:23:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 ************************************ 00:06:31.393 START TEST accel_decomp_mthread 00:06:31.393 ************************************ 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:31.393 [2024-07-15 14:23:10.605351] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:31.393 [2024-07-15 14:23:10.605466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64421 ] 00:06:31.393 [2024-07-15 14:23:10.744368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.393 [2024-07-15 14:23:10.825899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.393 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.394 14:23:10 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.769 00:06:32.769 real 0m1.408s 00:06:32.769 user 0m1.223s 00:06:32.769 sys 0m0.086s 00:06:32.769 ************************************ 00:06:32.769 END TEST accel_decomp_mthread 00:06:32.769 ************************************ 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.769 14:23:11 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:32.769 14:23:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.769 14:23:12 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.769 14:23:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:32.769 14:23:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.769 14:23:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.769 ************************************ 00:06:32.769 START 
TEST accel_decomp_full_mthread 00:06:32.769 ************************************ 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:32.769 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:32.770 [2024-07-15 14:23:12.068856] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:32.770 [2024-07-15 14:23:12.069013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64456 ] 00:06:32.770 [2024-07-15 14:23:12.212437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.770 [2024-07-15 14:23:12.284013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.770 14:23:12 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.770 14:23:12 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.157 00:06:34.157 real 0m1.426s 00:06:34.157 user 0m1.252s 00:06:34.157 sys 0m0.083s 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.157 14:23:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:34.157 ************************************ 00:06:34.157 END TEST accel_decomp_full_mthread 00:06:34.157 ************************************ 
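For reference, the four decompress variants traced above (accel_decomp_mcore, accel_decomp_full_mcore, accel_decomp_mthread, accel_decomp_full_mthread) all wrap the same accel_perf invocation that appears in the trace; only the core mask (-m) and worker-thread count (-T) differ, and the "full" variants add -o 0 and report a 111250-byte buffer instead of 4096 bytes. A minimal manual reproduction, assuming the workspace layout of this run; the harness additionally passes -c /dev/fd/62 with a generated accel JSON config, which stays empty in this configuration and is omitted here:

  # multi-core run, as in accel_decomp_full_mcore above: 4 reactors (-m 0xf), software module
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf

  # single core, two worker threads, as in accel_decomp_full_mthread above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2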
00:06:34.157 14:23:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.157 14:23:13 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:34.157 14:23:13 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:34.157 14:23:13 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:34.157 14:23:13 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:34.157 14:23:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.157 14:23:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.157 14:23:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.157 14:23:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.157 14:23:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.157 14:23:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.157 14:23:13 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.157 14:23:13 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:34.157 14:23:13 accel -- accel/accel.sh@41 -- # jq -r . 00:06:34.157 ************************************ 00:06:34.157 START TEST accel_dif_functional_tests 00:06:34.157 ************************************ 00:06:34.157 14:23:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:34.157 [2024-07-15 14:23:13.564915] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:34.157 [2024-07-15 14:23:13.565007] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64491 ] 00:06:34.157 [2024-07-15 14:23:13.701154] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.416 [2024-07-15 14:23:13.771367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.416 [2024-07-15 14:23:13.771444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.416 [2024-07-15 14:23:13.771445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.416 00:06:34.416 00:06:34.416 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.416 http://cunit.sourceforge.net/ 00:06:34.416 00:06:34.416 00:06:34.416 Suite: accel_dif 00:06:34.416 Test: verify: DIF generated, GUARD check ...passed 00:06:34.416 Test: verify: DIF generated, APPTAG check ...passed 00:06:34.416 Test: verify: DIF generated, REFTAG check ...passed 00:06:34.416 Test: verify: DIF not generated, GUARD check ...passed 00:06:34.416 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 14:23:13.826581] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.416 [2024-07-15 14:23:13.826677] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.416 passed 00:06:34.416 Test: verify: DIF not generated, REFTAG check ...passed 00:06:34.416 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:34.416 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 14:23:13.826743] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.416 [2024-07-15 14:23:13.826812] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:34.416 passed 00:06:34.416 Test: verify: APPTAG incorrect, no 
APPTAG check ...passed 00:06:34.416 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:34.416 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:34.416 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 14:23:13.827278] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:34.416 passed 00:06:34.416 Test: verify copy: DIF generated, GUARD check ...passed 00:06:34.416 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:34.416 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:34.416 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:34.416 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 14:23:13.827627] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.416 [2024-07-15 14:23:13.827676] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.416 passed 00:06:34.416 Test: verify copy: DIF not generated, REFTAG check ...passed 00:06:34.416 Test: generate copy: DIF generated, GUARD check ...passed 00:06:34.416 Test: generate copy: DIF generated, APTTAG check ...[2024-07-15 14:23:13.827728] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.416 passed 00:06:34.416 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:34.416 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:34.416 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:34.416 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:34.416 Test: generate copy: iovecs-len validate ...[2024-07-15 14:23:13.828365] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:34.416 passed 00:06:34.416 Test: generate copy: buffer alignment validate ...passed 00:06:34.416 00:06:34.416 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.416 suites 1 1 n/a 0 0 00:06:34.416 tests 26 26 26 0 0 00:06:34.416 asserts 115 115 115 0 n/a 00:06:34.416 00:06:34.416 Elapsed time = 0.005 seconds 00:06:34.416 00:06:34.416 real 0m0.481s 00:06:34.416 user 0m0.569s 00:06:34.416 sys 0m0.104s 00:06:34.416 14:23:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.416 14:23:13 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:34.416 ************************************ 00:06:34.416 END TEST accel_dif_functional_tests 00:06:34.416 ************************************ 00:06:34.674 14:23:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.675 00:06:34.675 real 0m31.849s 00:06:34.675 user 0m34.202s 00:06:34.675 sys 0m2.918s 00:06:34.675 14:23:14 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.675 14:23:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.675 ************************************ 00:06:34.675 END TEST accel 00:06:34.675 ************************************ 00:06:34.675 14:23:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.675 14:23:14 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:34.675 14:23:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.675 14:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.675 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:06:34.675 ************************************ 00:06:34.675 START TEST accel_rpc 00:06:34.675 ************************************ 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:34.675 * Looking for test storage... 00:06:34.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:34.675 14:23:14 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.675 14:23:14 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64556 00:06:34.675 14:23:14 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:34.675 14:23:14 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64556 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64556 ']' 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.675 14:23:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.675 [2024-07-15 14:23:14.214392] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:06:34.675 [2024-07-15 14:23:14.214490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64556 ] 00:06:34.934 [2024-07-15 14:23:14.351595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.934 [2024-07-15 14:23:14.419375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.934 14:23:14 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.934 14:23:14 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.934 14:23:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:34.934 14:23:14 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:34.934 14:23:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:34.934 14:23:14 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:34.934 14:23:14 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:34.934 14:23:14 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.934 14:23:14 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.934 14:23:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 ************************************ 00:06:34.934 START TEST accel_assign_opcode 00:06:34.934 ************************************ 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 [2024-07-15 14:23:14.499866] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 [2024-07-15 14:23:14.507842] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.934 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode 
-- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.192 software 00:06:35.192 00:06:35.192 real 0m0.196s 00:06:35.192 user 0m0.043s 00:06:35.192 sys 0m0.012s 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.192 14:23:14 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:35.192 ************************************ 00:06:35.192 END TEST accel_assign_opcode 00:06:35.192 ************************************ 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:35.192 14:23:14 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64556 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64556 ']' 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64556 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64556 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.192 killing process with pid 64556 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64556' 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@967 -- # kill 64556 00:06:35.192 14:23:14 accel_rpc -- common/autotest_common.sh@972 -- # wait 64556 00:06:35.449 00:06:35.449 real 0m0.912s 00:06:35.449 user 0m0.947s 00:06:35.449 sys 0m0.294s 00:06:35.449 14:23:14 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.449 14:23:14 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.449 ************************************ 00:06:35.449 END TEST accel_rpc 00:06:35.449 ************************************ 00:06:35.449 14:23:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.449 14:23:15 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:35.449 14:23:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.449 14:23:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.449 14:23:15 -- common/autotest_common.sh@10 -- # set +x 00:06:35.449 ************************************ 00:06:35.449 START TEST app_cmdline 00:06:35.449 ************************************ 00:06:35.449 14:23:15 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:35.707 * Looking for test storage... 
00:06:35.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.707 14:23:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:35.707 14:23:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64648 00:06:35.707 14:23:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:35.707 14:23:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64648 00:06:35.707 14:23:15 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64648 ']' 00:06:35.707 14:23:15 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.707 14:23:15 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.707 14:23:15 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.707 14:23:15 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.707 14:23:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.707 [2024-07-15 14:23:15.181368] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:35.707 [2024-07-15 14:23:15.181475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64648 ] 00:06:35.965 [2024-07-15 14:23:15.317940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.965 [2024-07-15 14:23:15.404471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.900 14:23:16 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.900 14:23:16 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:36.900 14:23:16 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:37.158 { 00:06:37.158 "fields": { 00:06:37.158 "commit": "72fc6988f", 00:06:37.158 "major": 24, 00:06:37.158 "minor": 9, 00:06:37.158 "patch": 0, 00:06:37.158 "suffix": "-pre" 00:06:37.158 }, 00:06:37.158 "version": "SPDK v24.09-pre git sha1 72fc6988f" 00:06:37.158 } 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.158 14:23:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.158 14:23:16 app_cmdline -- 
app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:37.158 14:23:16 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.416 2024/07/15 14:23:16 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:37.416 request: 00:06:37.416 { 00:06:37.416 "method": "env_dpdk_get_mem_stats", 00:06:37.416 "params": {} 00:06:37.416 } 00:06:37.416 Got JSON-RPC error response 00:06:37.416 GoRPCClient: error on JSON-RPC call 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.416 14:23:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64648 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64648 ']' 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64648 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64648 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.416 killing process with pid 64648 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64648' 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@967 -- # kill 64648 00:06:37.416 14:23:16 app_cmdline -- common/autotest_common.sh@972 -- # wait 64648 00:06:37.673 00:06:37.673 real 0m2.135s 00:06:37.673 user 0m2.903s 00:06:37.673 sys 0m0.410s 00:06:37.673 14:23:17 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.673 14:23:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.673 ************************************ 00:06:37.673 END TEST app_cmdline 00:06:37.673 
************************************ 00:06:37.673 14:23:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.673 14:23:17 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:37.673 14:23:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.673 14:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.673 14:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.673 ************************************ 00:06:37.673 START TEST version 00:06:37.673 ************************************ 00:06:37.673 14:23:17 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:37.932 * Looking for test storage... 00:06:37.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:37.932 14:23:17 version -- app/version.sh@17 -- # get_header_version major 00:06:37.932 14:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # cut -f2 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.932 14:23:17 version -- app/version.sh@17 -- # major=24 00:06:37.932 14:23:17 version -- app/version.sh@18 -- # get_header_version minor 00:06:37.932 14:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # cut -f2 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.932 14:23:17 version -- app/version.sh@18 -- # minor=9 00:06:37.932 14:23:17 version -- app/version.sh@19 -- # get_header_version patch 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # cut -f2 00:06:37.932 14:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.932 14:23:17 version -- app/version.sh@19 -- # patch=0 00:06:37.932 14:23:17 version -- app/version.sh@20 -- # get_header_version suffix 00:06:37.932 14:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # cut -f2 00:06:37.932 14:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.932 14:23:17 version -- app/version.sh@20 -- # suffix=-pre 00:06:37.932 14:23:17 version -- app/version.sh@22 -- # version=24.9 00:06:37.932 14:23:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:37.932 14:23:17 version -- app/version.sh@28 -- # version=24.9rc0 00:06:37.932 14:23:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:37.932 14:23:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:37.932 14:23:17 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:37.932 14:23:17 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:37.932 00:06:37.932 real 0m0.148s 00:06:37.932 user 0m0.083s 00:06:37.932 sys 0m0.092s 00:06:37.932 14:23:17 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.932 14:23:17 version -- common/autotest_common.sh@10 -- # set +x 
00:06:37.932 ************************************ 00:06:37.932 END TEST version 00:06:37.932 ************************************ 00:06:37.932 14:23:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.932 14:23:17 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@198 -- # uname -s 00:06:37.932 14:23:17 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:37.932 14:23:17 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:37.932 14:23:17 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:37.932 14:23:17 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:37.932 14:23:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.932 14:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 14:23:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:37.932 14:23:17 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:37.932 14:23:17 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.932 14:23:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:37.932 14:23:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.932 14:23:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.932 ************************************ 00:06:37.932 START TEST nvmf_tcp 00:06:37.932 ************************************ 00:06:37.932 14:23:17 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.932 * Looking for test storage... 00:06:37.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:37.932 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.190 14:23:17 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.190 14:23:17 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.190 14:23:17 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.190 14:23:17 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:38.190 14:23:17 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:38.190 14:23:17 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.190 14:23:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:38.190 14:23:17 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.190 14:23:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.190 14:23:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.190 14:23:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.190 ************************************ 00:06:38.190 START TEST nvmf_example 00:06:38.190 ************************************ 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.190 * Looking for test storage... 
00:06:38.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:38.190 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.191 14:23:17 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:38.191 Cannot find device "nvmf_init_br" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:38.191 Cannot find device "nvmf_tgt_br" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.191 Cannot find device "nvmf_tgt_br2" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:38.191 Cannot find device "nvmf_init_br" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:38.191 Cannot find device "nvmf_tgt_br" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:38.191 Cannot find device 
"nvmf_tgt_br2" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:38.191 Cannot find device "nvmf_br" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:38.191 Cannot find device "nvmf_init_if" 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:38.191 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:38.449 14:23:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:38.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:06:38.449 00:06:38.449 --- 10.0.0.2 ping statistics --- 00:06:38.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.449 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:38.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:38.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:06:38.449 00:06:38.449 --- 10.0.0.3 ping statistics --- 00:06:38.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.449 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:38.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:06:38.449 00:06:38.449 --- 10.0.0.1 ping statistics --- 00:06:38.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.449 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:38.449 14:23:18 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=65002 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
65002 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 65002 ']' 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.706 14:23:18 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:39.639 14:23:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:51.833 Initializing NVMe Controllers 00:06:51.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:51.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:51.833 Initialization complete. Launching workers. 00:06:51.833 ======================================================== 00:06:51.833 Latency(us) 00:06:51.833 Device Information : IOPS MiB/s Average min max 00:06:51.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14546.25 56.82 4399.37 753.75 22228.04 00:06:51.833 ======================================================== 00:06:51.833 Total : 14546.25 56.82 4399.37 753.75 22228.04 00:06:51.833 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:51.833 rmmod nvme_tcp 00:06:51.833 rmmod nvme_fabrics 00:06:51.833 rmmod nvme_keyring 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 65002 ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 65002 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 65002 ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 65002 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65002 00:06:51.833 killing process with pid 65002 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65002' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 65002 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 65002 00:06:51.833 nvmf threads initialize successfully 00:06:51.833 bdev subsystem init successfully 
00:06:51.833 created a nvmf target service 00:06:51.833 create targets's poll groups done 00:06:51.833 all subsystems of target started 00:06:51.833 nvmf target is running 00:06:51.833 all subsystems of target stopped 00:06:51.833 destroy targets's poll groups done 00:06:51.833 destroyed the nvmf target service 00:06:51.833 bdev subsystem finish successfully 00:06:51.833 nvmf threads destroy successfully 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.833 00:06:51.833 real 0m12.230s 00:06:51.833 user 0m44.111s 00:06:51.833 sys 0m1.933s 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.833 14:23:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:51.833 ************************************ 00:06:51.833 END TEST nvmf_example 00:06:51.833 ************************************ 00:06:51.833 14:23:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:51.833 14:23:29 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:51.833 14:23:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.833 14:23:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.833 14:23:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.833 ************************************ 00:06:51.833 START TEST nvmf_filesystem 00:06:51.833 ************************************ 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:51.833 * Looking for test storage... 
00:06:51.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:51.833 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:51.834 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:51.834 #define SPDK_CONFIG_H 00:06:51.834 #define SPDK_CONFIG_APPS 1 00:06:51.834 #define SPDK_CONFIG_ARCH native 00:06:51.834 #undef SPDK_CONFIG_ASAN 00:06:51.834 #define SPDK_CONFIG_AVAHI 1 00:06:51.834 #undef SPDK_CONFIG_CET 00:06:51.834 #define SPDK_CONFIG_COVERAGE 1 00:06:51.834 #define SPDK_CONFIG_CROSS_PREFIX 00:06:51.834 #undef SPDK_CONFIG_CRYPTO 00:06:51.834 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:51.834 #undef SPDK_CONFIG_CUSTOMOCF 00:06:51.834 #undef SPDK_CONFIG_DAOS 00:06:51.834 #define SPDK_CONFIG_DAOS_DIR 00:06:51.834 #define SPDK_CONFIG_DEBUG 1 00:06:51.834 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:51.834 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:51.834 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:51.834 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:51.834 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:51.834 #undef SPDK_CONFIG_DPDK_UADK 00:06:51.834 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:51.834 #define SPDK_CONFIG_EXAMPLES 1 00:06:51.834 #undef SPDK_CONFIG_FC 00:06:51.834 #define SPDK_CONFIG_FC_PATH 00:06:51.834 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:51.834 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:51.834 #undef SPDK_CONFIG_FUSE 00:06:51.834 #undef SPDK_CONFIG_FUZZER 00:06:51.834 #define SPDK_CONFIG_FUZZER_LIB 00:06:51.834 #define SPDK_CONFIG_GOLANG 1 00:06:51.834 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:51.834 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:51.834 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:51.834 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:51.834 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:51.834 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:51.834 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:51.834 #define SPDK_CONFIG_IDXD 1 00:06:51.834 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:51.834 #undef SPDK_CONFIG_IPSEC_MB 00:06:51.834 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:51.834 #define SPDK_CONFIG_ISAL 1 00:06:51.834 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:51.834 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:51.835 #define SPDK_CONFIG_LIBDIR 00:06:51.835 #undef SPDK_CONFIG_LTO 00:06:51.835 #define SPDK_CONFIG_MAX_LCORES 128 00:06:51.835 #define SPDK_CONFIG_NVME_CUSE 1 00:06:51.835 #undef SPDK_CONFIG_OCF 00:06:51.835 #define SPDK_CONFIG_OCF_PATH 00:06:51.835 #define SPDK_CONFIG_OPENSSL_PATH 00:06:51.835 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:51.835 #define SPDK_CONFIG_PGO_DIR 00:06:51.835 #undef SPDK_CONFIG_PGO_USE 00:06:51.835 #define SPDK_CONFIG_PREFIX /usr/local 00:06:51.835 #undef SPDK_CONFIG_RAID5F 00:06:51.835 #undef SPDK_CONFIG_RBD 00:06:51.835 #define SPDK_CONFIG_RDMA 1 00:06:51.835 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:51.835 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:51.835 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:51.835 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:51.835 #define SPDK_CONFIG_SHARED 1 00:06:51.835 #undef SPDK_CONFIG_SMA 00:06:51.835 #define SPDK_CONFIG_TESTS 1 00:06:51.835 #undef SPDK_CONFIG_TSAN 00:06:51.835 #define SPDK_CONFIG_UBLK 1 00:06:51.835 #define SPDK_CONFIG_UBSAN 1 00:06:51.835 #undef SPDK_CONFIG_UNIT_TESTS 00:06:51.835 #undef SPDK_CONFIG_URING 00:06:51.835 #define SPDK_CONFIG_URING_PATH 00:06:51.835 #undef SPDK_CONFIG_URING_ZNS 00:06:51.835 #define SPDK_CONFIG_USDT 1 00:06:51.835 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:51.835 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:51.835 #undef SPDK_CONFIG_VFIO_USER 00:06:51.835 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:51.835 #define SPDK_CONFIG_VHOST 1 00:06:51.835 #define SPDK_CONFIG_VIRTIO 1 00:06:51.835 #undef SPDK_CONFIG_VTUNE 00:06:51.835 #define SPDK_CONFIG_VTUNE_DIR 00:06:51.835 #define SPDK_CONFIG_WERROR 1 00:06:51.835 #define SPDK_CONFIG_WPDK_DIR 00:06:51.835 #undef SPDK_CONFIG_XNVME 00:06:51.835 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:51.835 14:23:29 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:51.835 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:51.836 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65243 ]] 00:06:51.837 14:23:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65243 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.sbQb6s 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.sbQb6s/tests/target /tmp/spdk.sbQb6s 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:51.837 
14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13787574272 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5241884672 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13787574272 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5241884672 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=95106707456 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4596072448 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:51.837 * Looking for test storage... 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13787574272 00:06:51.837 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:51.838 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:51.838 Cannot find device "nvmf_tgt_br" 00:06:51.838 14:23:30 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:51.839 Cannot find device "nvmf_tgt_br2" 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:51.839 Cannot find device "nvmf_tgt_br" 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:51.839 Cannot find device "nvmf_tgt_br2" 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:51.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:51.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:51.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:51.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:06:51.839 00:06:51.839 --- 10.0.0.2 ping statistics --- 00:06:51.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.839 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:51.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:51.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:06:51.839 00:06:51.839 --- 10.0.0.3 ping statistics --- 00:06:51.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.839 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:51.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:51.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:06:51.839 00:06:51.839 --- 10.0.0.1 ping statistics --- 00:06:51.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.839 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:51.839 ************************************ 00:06:51.839 START TEST nvmf_filesystem_no_in_capsule 00:06:51.839 ************************************ 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65409 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65409 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65409 ']' 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:51.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.839 14:23:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.839 [2024-07-15 14:23:30.470284] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:51.840 [2024-07-15 14:23:30.470394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.840 [2024-07-15 14:23:30.607385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.840 [2024-07-15 14:23:30.677239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.840 [2024-07-15 14:23:30.677288] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.840 [2024-07-15 14:23:30.677300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.840 [2024-07-15 14:23:30.677308] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.840 [2024-07-15 14:23:30.677315] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.840 [2024-07-15 14:23:30.677408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.840 [2024-07-15 14:23:30.677629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.840 [2024-07-15 14:23:30.678144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.840 [2024-07-15 14:23:30.678166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.840 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.840 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:51.840 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.840 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.840 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.098 [2024-07-15 14:23:31.463208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.098 Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.098 [2024-07-15 14:23:31.592015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.098 14:23:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:52.098 { 00:06:52.098 "aliases": [ 00:06:52.098 "5525b226-1d44-4998-99f3-bd46da604391" 00:06:52.098 ], 00:06:52.098 "assigned_rate_limits": { 00:06:52.098 "r_mbytes_per_sec": 0, 00:06:52.098 "rw_ios_per_sec": 0, 00:06:52.098 "rw_mbytes_per_sec": 0, 00:06:52.098 "w_mbytes_per_sec": 0 00:06:52.098 }, 00:06:52.098 "block_size": 512, 00:06:52.098 "claim_type": "exclusive_write", 00:06:52.098 "claimed": true, 00:06:52.098 "driver_specific": {}, 00:06:52.098 "memory_domains": [ 00:06:52.098 { 00:06:52.098 "dma_device_id": "system", 00:06:52.098 "dma_device_type": 1 00:06:52.098 }, 00:06:52.098 { 00:06:52.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.098 "dma_device_type": 2 00:06:52.098 } 00:06:52.098 ], 00:06:52.098 "name": "Malloc1", 00:06:52.098 "num_blocks": 1048576, 00:06:52.098 "product_name": "Malloc disk", 00:06:52.098 "supported_io_types": { 00:06:52.098 "abort": true, 00:06:52.098 "compare": false, 00:06:52.098 "compare_and_write": false, 00:06:52.098 "copy": true, 00:06:52.098 "flush": true, 00:06:52.098 "get_zone_info": false, 00:06:52.098 "nvme_admin": false, 00:06:52.098 "nvme_io": false, 00:06:52.098 "nvme_io_md": false, 00:06:52.098 "nvme_iov_md": false, 00:06:52.098 "read": true, 00:06:52.098 "reset": true, 00:06:52.098 "seek_data": false, 00:06:52.098 "seek_hole": false, 00:06:52.098 "unmap": true, 00:06:52.098 "write": true, 00:06:52.098 "write_zeroes": true, 00:06:52.098 "zcopy": true, 00:06:52.098 "zone_append": false, 00:06:52.098 "zone_management": false 00:06:52.098 }, 00:06:52.098 "uuid": "5525b226-1d44-4998-99f3-bd46da604391", 00:06:52.098 "zoned": false 00:06:52.098 } 00:06:52.098 ]' 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:52.098 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:52.357 14:23:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:52.357 14:23:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:54.890 14:23:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:54.890 14:23:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.457 ************************************ 00:06:55.457 START TEST 
filesystem_ext4 00:06:55.457 ************************************ 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:55.457 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:55.457 mke2fs 1.46.5 (30-Dec-2021) 00:06:55.715 Discarding device blocks: 0/522240 done 00:06:55.715 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:55.715 Filesystem UUID: 8c09c0bc-883e-4968-8671-4d24ff94acb9 00:06:55.715 Superblock backups stored on blocks: 00:06:55.715 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:55.715 00:06:55.715 Allocating group tables: 0/64 done 00:06:55.715 Writing inode tables: 0/64 done 00:06:55.715 Creating journal (8192 blocks): done 00:06:55.715 Writing superblocks and filesystem accounting information: 0/64 done 00:06:55.715 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@37 -- # kill -0 65409 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.715 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.716 00:06:55.716 real 0m0.255s 00:06:55.716 user 0m0.023s 00:06:55.716 sys 0m0.046s 00:06:55.716 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.716 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:55.716 ************************************ 00:06:55.716 END TEST filesystem_ext4 00:06:55.716 ************************************ 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.974 ************************************ 00:06:55.974 START TEST filesystem_btrfs 00:06:55.974 ************************************ 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:55.974 14:23:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:55.974 btrfs-progs v6.6.2 00:06:55.974 See https://btrfs.readthedocs.io for more information. 00:06:55.974 00:06:55.974 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:55.974 NOTE: several default settings have changed in version 5.15, please make sure 00:06:55.974 this does not affect your deployments: 00:06:55.974 - DUP for metadata (-m dup) 00:06:55.974 - enabled no-holes (-O no-holes) 00:06:55.974 - enabled free-space-tree (-R free-space-tree) 00:06:55.974 00:06:55.974 Label: (null) 00:06:55.974 UUID: 25721573-a629-4f31-930d-a5f7f5e292d4 00:06:55.974 Node size: 16384 00:06:55.974 Sector size: 4096 00:06:55.974 Filesystem size: 510.00MiB 00:06:55.974 Block group profiles: 00:06:55.974 Data: single 8.00MiB 00:06:55.974 Metadata: DUP 32.00MiB 00:06:55.974 System: DUP 8.00MiB 00:06:55.974 SSD detected: yes 00:06:55.974 Zoned device: no 00:06:55.974 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:55.974 Runtime features: free-space-tree 00:06:55.974 Checksum: crc32c 00:06:55.974 Number of devices: 1 00:06:55.974 Devices: 00:06:55.974 ID SIZE PATH 00:06:55.974 1 510.00MiB /dev/nvme0n1p1 00:06:55.974 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:55.974 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65409 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:55.975 00:06:55.975 real 0m0.172s 00:06:55.975 user 0m0.020s 00:06:55.975 sys 0m0.057s 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:55.975 
************************************ 00:06:55.975 END TEST filesystem_btrfs 00:06:55.975 ************************************ 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:55.975 ************************************ 00:06:55.975 START TEST filesystem_xfs 00:06:55.975 ************************************ 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:55.975 14:23:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:56.234 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:56.234 = sectsz=512 attr=2, projid32bit=1 00:06:56.234 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:56.234 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:56.234 data = bsize=4096 blocks=130560, imaxpct=25 00:06:56.234 = sunit=0 swidth=0 blks 00:06:56.234 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:56.234 log =internal log bsize=4096 blocks=16384, version=2 00:06:56.234 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:56.234 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:56.801 Discarding blocks...Done. 
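The "Discarding blocks...Done." line above is the tail of mkfs.xfs; what follows in the trace is the same mount-and-touch cycle the ext4 and btrfs subtests just ran. Condensed, each filesystem_* subtest amounts to the sketch below; the loop and variable names are mine, and the real logic is split across target/filesystem.sh and the make_filesystem helper in common/autotest_common.sh.
  dev=/dev/nvme0n1        # block device exported by the target (serial SPDKISFASTANDAWESOME)
  nvmfpid=65409           # nvmf_tgt pid for this run (65710 in the in-capsule run below)
  parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkdir -p /mnt/device
  for fs in ext4 btrfs xfs; do
      case "$fs" in
          ext4) mkfs.ext4 -F "${dev}p1" ;;      # ext4 forces with -F
          *)    "mkfs.$fs" -f "${dev}p1" ;;     # btrfs and xfs force with -f
      esac
      mount "${dev}p1" /mnt/device
      touch /mnt/device/aaa && sync             # small write through the NVMe/TCP path
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$nvmfpid"                        # target process must still be alive
      lsblk -l -o NAME | grep -q -w nvme0n1     # device and partition still present
      lsblk -l -o NAME | grep -q -w nvme0n1p1
  done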
00:06:56.801 14:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:56.801 14:23:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65409 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:59.333 00:06:59.333 real 0m3.094s 00:06:59.333 user 0m0.015s 00:06:59.333 sys 0m0.053s 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:59.333 ************************************ 00:06:59.333 END TEST filesystem_xfs 00:06:59.333 ************************************ 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:59.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.333 14:23:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65409 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65409 ']' 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65409 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65409 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.333 killing process with pid 65409 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65409' 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65409 00:06:59.333 14:23:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65409 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:59.592 00:06:59.592 real 0m8.659s 00:06:59.592 user 0m32.812s 00:06:59.592 sys 0m1.436s 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.592 ************************************ 00:06:59.592 END TEST nvmf_filesystem_no_in_capsule 00:06:59.592 ************************************ 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
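At this point the first variant has torn down (nvme disconnect, nvmf_delete_subsystem, killprocess 65409) and run_test launches nvmf_filesystem_in_capsule, which repeats the whole exercise with an in-capsule data size of 4096 bytes instead of 0, so small writes can ride inside the NVMe/TCP command capsule rather than being fetched in a separate data transfer. The target-side setup is otherwise identical. As a hedged sketch, the rpc_cmd calls in the trace map onto scripts/rpc.py (against the default /var/tmp/spdk.sock) roughly like this:
  # start the target inside the test namespace, flags as seen in the trace
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 1; done      # the harness uses waitforlisten instead

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 0 in the no-in-capsule run
  ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MB RAM bdev, 512 B blocks
  ./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks'  # 1048576 blocks = 512 MiB, matched against the nvme size
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: connect (hostnqn/hostid from the trace omitted here) and wait for the namespace
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done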
00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.592 ************************************ 00:06:59.592 START TEST nvmf_filesystem_in_capsule 00:06:59.592 ************************************ 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65710 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65710 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65710 ']' 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.592 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.592 [2024-07-15 14:23:39.175351] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:06:59.592 [2024-07-15 14:23:39.175437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.851 [2024-07-15 14:23:39.311345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.851 [2024-07-15 14:23:39.371094] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.851 [2024-07-15 14:23:39.371147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.851 [2024-07-15 14:23:39.371158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.851 [2024-07-15 14:23:39.371166] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:59.851 [2024-07-15 14:23:39.371173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:59.851 [2024-07-15 14:23:39.371357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.851 [2024-07-15 14:23:39.371491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.851 [2024-07-15 14:23:39.371610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.851 [2024-07-15 14:23:39.371610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 [2024-07-15 14:23:39.487616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.110 14:23:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 [2024-07-15 14:23:39.612244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.110 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:00.110 { 00:07:00.110 "aliases": [ 00:07:00.110 "dab8bf9a-341c-452b-beec-89e0fbe6b8a9" 00:07:00.110 ], 00:07:00.110 "assigned_rate_limits": { 00:07:00.110 "r_mbytes_per_sec": 0, 00:07:00.110 "rw_ios_per_sec": 0, 00:07:00.110 "rw_mbytes_per_sec": 0, 00:07:00.110 "w_mbytes_per_sec": 0 00:07:00.110 }, 00:07:00.110 "block_size": 512, 00:07:00.110 "claim_type": "exclusive_write", 00:07:00.110 "claimed": true, 00:07:00.110 "driver_specific": {}, 00:07:00.110 "memory_domains": [ 00:07:00.110 { 00:07:00.110 "dma_device_id": "system", 00:07:00.110 "dma_device_type": 1 00:07:00.110 }, 00:07:00.110 { 00:07:00.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.110 "dma_device_type": 2 00:07:00.110 } 00:07:00.110 ], 00:07:00.110 "name": "Malloc1", 00:07:00.110 "num_blocks": 1048576, 00:07:00.110 "product_name": "Malloc disk", 00:07:00.110 "supported_io_types": { 00:07:00.110 "abort": true, 00:07:00.110 "compare": false, 00:07:00.110 "compare_and_write": false, 00:07:00.110 "copy": true, 00:07:00.110 "flush": true, 00:07:00.110 "get_zone_info": false, 00:07:00.111 "nvme_admin": false, 00:07:00.111 "nvme_io": false, 00:07:00.111 "nvme_io_md": false, 00:07:00.111 "nvme_iov_md": false, 00:07:00.111 "read": true, 00:07:00.111 "reset": true, 00:07:00.111 "seek_data": false, 00:07:00.111 "seek_hole": false, 00:07:00.111 "unmap": true, 
00:07:00.111 "write": true, 00:07:00.111 "write_zeroes": true, 00:07:00.111 "zcopy": true, 00:07:00.111 "zone_append": false, 00:07:00.111 "zone_management": false 00:07:00.111 }, 00:07:00.111 "uuid": "dab8bf9a-341c-452b-beec-89e0fbe6b8a9", 00:07:00.111 "zoned": false 00:07:00.111 } 00:07:00.111 ]' 00:07:00.111 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:00.111 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:00.111 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:00.369 14:23:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:02.897 14:23:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:02.897 14:23:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:02.897 14:23:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.465 ************************************ 00:07:03.465 START TEST filesystem_in_capsule_ext4 00:07:03.465 ************************************ 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:03.465 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:03.465 14:23:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:03.465 mke2fs 1.46.5 (30-Dec-2021) 00:07:03.724 Discarding device blocks: 0/522240 done 00:07:03.724 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:03.724 Filesystem UUID: 8f0ae846-7556-4b83-a464-4ae8fdbc1404 00:07:03.724 Superblock backups stored on blocks: 00:07:03.724 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:03.724 00:07:03.724 Allocating group tables: 0/64 done 00:07:03.724 Writing inode tables: 0/64 done 00:07:03.724 Creating journal (8192 blocks): done 00:07:03.724 Writing superblocks and filesystem accounting information: 0/64 done 00:07:03.724 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65710 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.724 00:07:03.724 real 0m0.262s 00:07:03.724 user 0m0.020s 00:07:03.724 sys 0m0.056s 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.724 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:03.724 ************************************ 00:07:03.724 END TEST filesystem_in_capsule_ext4 00:07:03.724 ************************************ 00:07:03.981 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:03.981 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:03.981 14:23:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:03.981 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.981 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.981 ************************************ 00:07:03.981 START TEST filesystem_in_capsule_btrfs 00:07:03.981 ************************************ 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:03.982 btrfs-progs v6.6.2 00:07:03.982 See https://btrfs.readthedocs.io for more information. 00:07:03.982 00:07:03.982 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:03.982 NOTE: several default settings have changed in version 5.15, please make sure 00:07:03.982 this does not affect your deployments: 00:07:03.982 - DUP for metadata (-m dup) 00:07:03.982 - enabled no-holes (-O no-holes) 00:07:03.982 - enabled free-space-tree (-R free-space-tree) 00:07:03.982 00:07:03.982 Label: (null) 00:07:03.982 UUID: 5a490f1e-a918-466d-be98-31616c4909b8 00:07:03.982 Node size: 16384 00:07:03.982 Sector size: 4096 00:07:03.982 Filesystem size: 510.00MiB 00:07:03.982 Block group profiles: 00:07:03.982 Data: single 8.00MiB 00:07:03.982 Metadata: DUP 32.00MiB 00:07:03.982 System: DUP 8.00MiB 00:07:03.982 SSD detected: yes 00:07:03.982 Zoned device: no 00:07:03.982 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:03.982 Runtime features: free-space-tree 00:07:03.982 Checksum: crc32c 00:07:03.982 Number of devices: 1 00:07:03.982 Devices: 00:07:03.982 ID SIZE PATH 00:07:03.982 1 510.00MiB /dev/nvme0n1p1 00:07:03.982 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65710 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.982 00:07:03.982 real 0m0.170s 00:07:03.982 user 0m0.021s 00:07:03.982 sys 0m0.061s 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 ************************************ 00:07:03.982 END TEST filesystem_in_capsule_btrfs 00:07:03.982 ************************************ 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.982 ************************************ 00:07:03.982 START TEST filesystem_in_capsule_xfs 00:07:03.982 ************************************ 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:03.982 14:23:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:04.239 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:04.239 = sectsz=512 attr=2, projid32bit=1 00:07:04.239 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:04.239 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:04.239 data = bsize=4096 blocks=130560, imaxpct=25 00:07:04.239 = sunit=0 swidth=0 blks 00:07:04.239 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:04.239 log =internal log bsize=4096 blocks=16384, version=2 00:07:04.239 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:04.239 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:04.805 Discarding blocks...Done. 
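A quick consistency check on the mkfs.xfs geometry printed above: 130560 data blocks of 4096 bytes is exactly the 510.00MiB reported for the same partition during the btrfs run, so the whole partition is in use:

  echo $(( 130560 * 4096 )) $(( 510 * 1024 * 1024 ))   # both print 534773760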
00:07:04.805 14:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:04.805 14:23:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65710 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:06.726 00:07:06.726 real 0m2.564s 00:07:06.726 user 0m0.018s 00:07:06.726 sys 0m0.053s 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:06.726 ************************************ 00:07:06.726 END TEST filesystem_in_capsule_xfs 00:07:06.726 ************************************ 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:06.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:06.726 14:23:46 
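The waitforserial_disconnect call above verifies that the namespace with serial SPDKISFASTANDAWESOME is really gone after nvme disconnect. Its exact loop isn't visible in the trace, only the lsblk/grep probes, so the following is a stand-in with an assumed retry cap:

  waitfor_serial_gone() {   # stand-in for waitforserial_disconnect
      local serial=$1 i=0
      while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
          i=$(( i + 1 ))
          [ "$i" -gt 15 ] && { echo "controller still present" >&2; return 1; }  # retry cap is a guess
          sleep 1
      done
  }
  waitfor_serial_gone SPDKISFASTANDAWESOME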
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65710 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65710 ']' 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65710 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65710 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.726 killing process with pid 65710 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65710' 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65710 00:07:06.726 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65710 00:07:06.985 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:06.985 00:07:06.985 real 0m7.435s 00:07:06.985 user 0m27.799s 00:07:06.985 sys 0m1.459s 00:07:06.985 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.985 14:23:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.985 ************************************ 00:07:06.985 END TEST nvmf_filesystem_in_capsule 00:07:06.985 ************************************ 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:07.243 rmmod nvme_tcp 00:07:07.243 rmmod nvme_fabrics 00:07:07.243 rmmod nvme_keyring 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:07.243 00:07:07.243 real 0m16.882s 00:07:07.243 user 1m0.832s 00:07:07.243 sys 0m3.283s 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.243 14:23:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.243 ************************************ 00:07:07.243 END TEST nvmf_filesystem 00:07:07.243 ************************************ 00:07:07.243 14:23:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:07.243 14:23:46 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:07.243 14:23:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.243 14:23:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.243 14:23:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.243 ************************************ 00:07:07.243 START TEST nvmf_target_discovery 00:07:07.243 ************************************ 00:07:07.243 14:23:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:07.243 * Looking for test storage... 
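nvmftestfini for the filesystem suite amounts to unloading the initiator-side NVMe/TCP modules, tearing down the target network namespace, and flushing the test address. The _remove_spdk_ns body is hidden by xtrace_disable_per_cmd, but the "Cannot open network namespace" errors during the next test's setup confirm the namespace is deleted; a stand-in for the whole cleanup:

  sync
  modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if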
00:07:07.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:07.243 14:23:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.501 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:07.502 Cannot find device "nvmf_tgt_br" 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:07.502 Cannot find device "nvmf_tgt_br2" 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:07.502 Cannot find device "nvmf_tgt_br" 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:07.502 Cannot find device "nvmf_tgt_br2" 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:07.502 14:23:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:07.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:07.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:07.502 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:07.760 14:23:47 
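For reference, the nvmf_veth_init sequence above (ignoring the harmless leftover-cleanup failures at the start) builds this topology: three veth pairs whose host ends are bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace. Condensed from the trace, names and addresses verbatim:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

The iptables ACCEPT rule and the three pings that follow simply prove that TCP port 4420 traffic can cross the bridge from the initiator to both target addresses, and back from the namespace to 10.0.0.1.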
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:07.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:07:07.760 00:07:07.760 --- 10.0.0.2 ping statistics --- 00:07:07.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.760 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:07.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:07.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:07.760 00:07:07.760 --- 10.0.0.3 ping statistics --- 00:07:07.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.760 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:07.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:07.760 00:07:07.760 --- 10.0.0.1 ping statistics --- 00:07:07.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.760 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66143 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66143 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66143 ']' 00:07:07.760 14:23:47 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.760 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:07.760 [2024-07-15 14:23:47.327280] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:07.761 [2024-07-15 14:23:47.327634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.019 [2024-07-15 14:23:47.470348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.019 [2024-07-15 14:23:47.541935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.019 [2024-07-15 14:23:47.541997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.019 [2024-07-15 14:23:47.542011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.019 [2024-07-15 14:23:47.542025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.019 [2024-07-15 14:23:47.542039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
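nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers; the waiting loop itself is not traced. A minimal stand-in using the stock rpc.py client (the binary path, core mask, and /var/tmp/spdk.sock come from the log; probing with rpc_get_methods is just one way to tell the RPC server is up):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done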
00:07:08.019 [2024-07-15 14:23:47.542182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.019 [2024-07-15 14:23:47.542419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.019 [2024-07-15 14:23:47.542760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.019 [2024-07-15 14:23:47.542772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 [2024-07-15 14:23:47.689385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 Null1 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.277 [2024-07-15 14:23:47.750244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 Null2 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 Null3 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 Null4 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.277 
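Collapsing the rpc_cmd calls above, the discovery test provisions its fixtures with the RPC sequence below. The rpc() helper is a stand-in for the framework's rpc_cmd wrapper; every argument after it is taken verbatim from the trace (102400 and 512 are discovery.sh's NULL_BDEV_SIZE and NULL_BLOCK_SIZE):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  rpc nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      rpc bdev_null_create Null$i 102400 512
      rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The nvme discover output that follows, with its six records (the current discovery subsystem, the four NVMe subsystems, and the 4430 referral), is the direct result of this sequence.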
14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 4420 00:07:08.535 00:07:08.535 Discovery Log Number of Records 6, Generation counter 6 00:07:08.535 =====Discovery Log Entry 0====== 00:07:08.535 trtype: tcp 00:07:08.535 adrfam: ipv4 00:07:08.535 subtype: current discovery subsystem 00:07:08.535 treq: not required 00:07:08.535 portid: 0 00:07:08.535 trsvcid: 4420 00:07:08.535 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:08.535 traddr: 10.0.0.2 00:07:08.535 eflags: explicit discovery connections, duplicate discovery information 00:07:08.535 sectype: none 00:07:08.535 =====Discovery Log Entry 1====== 00:07:08.535 trtype: tcp 00:07:08.535 adrfam: ipv4 00:07:08.535 subtype: nvme subsystem 00:07:08.535 treq: not required 00:07:08.535 portid: 0 00:07:08.535 trsvcid: 4420 00:07:08.535 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:08.535 traddr: 10.0.0.2 00:07:08.535 eflags: none 00:07:08.535 sectype: none 00:07:08.535 =====Discovery Log Entry 2====== 00:07:08.535 trtype: tcp 00:07:08.535 adrfam: ipv4 00:07:08.535 subtype: nvme subsystem 00:07:08.535 treq: not required 00:07:08.535 portid: 0 00:07:08.535 trsvcid: 4420 00:07:08.535 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:08.535 traddr: 10.0.0.2 00:07:08.535 eflags: none 00:07:08.535 sectype: none 00:07:08.535 =====Discovery Log Entry 3====== 00:07:08.535 trtype: tcp 00:07:08.535 adrfam: ipv4 00:07:08.535 subtype: nvme subsystem 00:07:08.535 treq: not required 00:07:08.535 portid: 0 00:07:08.535 trsvcid: 4420 00:07:08.535 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:08.535 traddr: 10.0.0.2 00:07:08.535 eflags: none 00:07:08.535 sectype: none 00:07:08.535 =====Discovery Log Entry 4====== 00:07:08.535 trtype: tcp 00:07:08.535 adrfam: ipv4 00:07:08.535 subtype: nvme subsystem 00:07:08.535 treq: not required 00:07:08.535 portid: 0 00:07:08.535 trsvcid: 4420 00:07:08.535 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:08.535 traddr: 10.0.0.2 00:07:08.535 eflags: none 00:07:08.535 sectype: none 00:07:08.535 =====Discovery Log Entry 5====== 00:07:08.535 trtype: tcp 00:07:08.535 adrfam: ipv4 00:07:08.535 subtype: discovery subsystem referral 00:07:08.535 treq: not required 00:07:08.535 portid: 0 00:07:08.535 trsvcid: 4430 00:07:08.535 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:08.535 traddr: 10.0.0.2 00:07:08.535 eflags: none 00:07:08.535 sectype: none 00:07:08.535 Perform nvmf subsystem discovery via RPC 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 [ 00:07:08.535 { 00:07:08.535 "allow_any_host": true, 00:07:08.535 "hosts": [], 00:07:08.535 "listen_addresses": [ 00:07:08.535 { 00:07:08.535 "adrfam": "IPv4", 00:07:08.535 "traddr": "10.0.0.2", 00:07:08.535 "trsvcid": "4420", 00:07:08.535 "trtype": "TCP" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:08.535 "subtype": "Discovery" 00:07:08.535 }, 00:07:08.535 { 00:07:08.535 "allow_any_host": true, 00:07:08.535 "hosts": [], 00:07:08.535 "listen_addresses": [ 00:07:08.535 { 
00:07:08.535 "adrfam": "IPv4", 00:07:08.535 "traddr": "10.0.0.2", 00:07:08.535 "trsvcid": "4420", 00:07:08.535 "trtype": "TCP" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "max_cntlid": 65519, 00:07:08.535 "max_namespaces": 32, 00:07:08.535 "min_cntlid": 1, 00:07:08.535 "model_number": "SPDK bdev Controller", 00:07:08.535 "namespaces": [ 00:07:08.535 { 00:07:08.535 "bdev_name": "Null1", 00:07:08.535 "name": "Null1", 00:07:08.535 "nguid": "DD2457A1A36E408EA61B8AE4E75D91B8", 00:07:08.535 "nsid": 1, 00:07:08.535 "uuid": "dd2457a1-a36e-408e-a61b-8ae4e75d91b8" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:08.535 "serial_number": "SPDK00000000000001", 00:07:08.535 "subtype": "NVMe" 00:07:08.535 }, 00:07:08.535 { 00:07:08.535 "allow_any_host": true, 00:07:08.535 "hosts": [], 00:07:08.535 "listen_addresses": [ 00:07:08.535 { 00:07:08.535 "adrfam": "IPv4", 00:07:08.535 "traddr": "10.0.0.2", 00:07:08.535 "trsvcid": "4420", 00:07:08.535 "trtype": "TCP" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "max_cntlid": 65519, 00:07:08.535 "max_namespaces": 32, 00:07:08.535 "min_cntlid": 1, 00:07:08.535 "model_number": "SPDK bdev Controller", 00:07:08.535 "namespaces": [ 00:07:08.535 { 00:07:08.535 "bdev_name": "Null2", 00:07:08.535 "name": "Null2", 00:07:08.535 "nguid": "8653CE1EC34642649BBB14C98B16B643", 00:07:08.535 "nsid": 1, 00:07:08.535 "uuid": "8653ce1e-c346-4264-9bbb-14c98b16b643" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:08.535 "serial_number": "SPDK00000000000002", 00:07:08.535 "subtype": "NVMe" 00:07:08.535 }, 00:07:08.535 { 00:07:08.535 "allow_any_host": true, 00:07:08.535 "hosts": [], 00:07:08.535 "listen_addresses": [ 00:07:08.535 { 00:07:08.535 "adrfam": "IPv4", 00:07:08.535 "traddr": "10.0.0.2", 00:07:08.535 "trsvcid": "4420", 00:07:08.535 "trtype": "TCP" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "max_cntlid": 65519, 00:07:08.535 "max_namespaces": 32, 00:07:08.535 "min_cntlid": 1, 00:07:08.535 "model_number": "SPDK bdev Controller", 00:07:08.535 "namespaces": [ 00:07:08.535 { 00:07:08.535 "bdev_name": "Null3", 00:07:08.535 "name": "Null3", 00:07:08.535 "nguid": "C9767EFDAECE4100940D569A2681BA31", 00:07:08.535 "nsid": 1, 00:07:08.535 "uuid": "c9767efd-aece-4100-940d-569a2681ba31" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:08.535 "serial_number": "SPDK00000000000003", 00:07:08.535 "subtype": "NVMe" 00:07:08.535 }, 00:07:08.535 { 00:07:08.535 "allow_any_host": true, 00:07:08.535 "hosts": [], 00:07:08.535 "listen_addresses": [ 00:07:08.535 { 00:07:08.535 "adrfam": "IPv4", 00:07:08.535 "traddr": "10.0.0.2", 00:07:08.535 "trsvcid": "4420", 00:07:08.535 "trtype": "TCP" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "max_cntlid": 65519, 00:07:08.535 "max_namespaces": 32, 00:07:08.535 "min_cntlid": 1, 00:07:08.535 "model_number": "SPDK bdev Controller", 00:07:08.535 "namespaces": [ 00:07:08.535 { 00:07:08.535 "bdev_name": "Null4", 00:07:08.535 "name": "Null4", 00:07:08.535 "nguid": "9709F8DE86A34305B17BCFC0D50BACF3", 00:07:08.535 "nsid": 1, 00:07:08.535 "uuid": "9709f8de-86a3-4305-b17b-cfc0d50bacf3" 00:07:08.535 } 00:07:08.535 ], 00:07:08.535 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:08.535 "serial_number": "SPDK00000000000004", 00:07:08.535 "subtype": "NVMe" 00:07:08.535 } 00:07:08.535 ] 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:47 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.535 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.792 rmmod nvme_tcp 00:07:08.792 rmmod nvme_fabrics 00:07:08.792 rmmod nvme_keyring 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66143 ']' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66143 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66143 ']' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66143 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66143 00:07:08.792 killing process with pid 66143 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- 
# process_name=reactor_0 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66143' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66143 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66143 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.792 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.049 14:23:48 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:09.049 00:07:09.049 real 0m1.638s 00:07:09.049 user 0m3.452s 00:07:09.049 sys 0m0.486s 00:07:09.049 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.049 14:23:48 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 END TEST nvmf_target_discovery 00:07:09.049 ************************************ 00:07:09.049 14:23:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:09.049 14:23:48 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:09.049 14:23:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:09.049 14:23:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.049 14:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 ************************************ 00:07:09.049 START TEST nvmf_referrals 00:07:09.049 ************************************ 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:09.049 * Looking for test storage... 
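For readers stepping through the trace: the nvmf_target_discovery teardown traced above (target/discovery.sh@42-50) reduces to a short RPC sequence. The sketch below is a condensed reconstruction based only on the commands visible in the trace; calling scripts/rpc.py directly is an assumption, since the test itself goes through its rpc_cmd wrapper.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the SPDK RPC client

# Delete the four test subsystems and their null bdevs (discovery.sh@42-44).
for i in $(seq 1 4); do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    "$rpc" bdev_null_delete "Null$i"
done

# Drop the discovery referral at 10.0.0.2:4430 (discovery.sh@47).
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

# The test passes only if no bdevs remain afterwards (discovery.sh@49-50).
check_bdevs=$("$rpc" bdev_get_bdevs | jq -r '.[].name')
[ -z "$check_bdevs" ] || echo "unexpected bdevs left behind: $check_bdevs"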
00:07:09.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.049 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:09.050 Cannot find device "nvmf_tgt_br" 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:09.050 Cannot find device "nvmf_tgt_br2" 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:09.050 Cannot find device "nvmf_tgt_br" 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:09.050 Cannot find device "nvmf_tgt_br2" 
00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:09.050 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:09.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:09.307 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:09.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:07:09.307 00:07:09.307 --- 10.0.0.2 ping statistics --- 00:07:09.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.307 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:09.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:09.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:07:09.307 00:07:09.307 --- 10.0.0.3 ping statistics --- 00:07:09.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.307 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:09.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:09.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:07:09.307 00:07:09.307 --- 10.0.0.1 ping statistics --- 00:07:09.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.307 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.307 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66353 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66353 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66353 ']' 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
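The referral exercise that the remainder of this trace drives (target/referrals.sh@40 onward) is easier to follow in condensed form. The sketch below mirrors the rpc_cmd and nvme discover calls that appear in the trace; invoking scripts/rpc.py and nvme directly, and the $NVME_HOSTNQN/$NVME_HOSTID variables, stand in for the test's rpc_cmd and NVME_HOST wrappers and are assumptions rather than part of the log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the SPDK RPC client

# Bring up the TCP transport and a discovery listener on 10.0.0.2:8009 (referrals.sh@40-41).
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

# Register three referrals pointing at 127.0.0.2-4, port 4430 (referrals.sh@44-46).
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# The test compares the RPC view of the referral list with what a host sees through
# discovery (referrals.sh@48-50); both should yield the same three addresses.
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# Referrals are removed the same way, then re-added with an explicit subsystem NQN
# (-n discovery, -n nqn.2016-06.io.spdk:cnode1) and re-checked, as the rest of the
# trace shows (referrals.sh@52-83).
"$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430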
00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.308 14:23:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:09.565 [2024-07-15 14:23:48.938258] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:09.565 [2024-07-15 14:23:48.938361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.565 [2024-07-15 14:23:49.082754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.822 [2024-07-15 14:23:49.167668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.822 [2024-07-15 14:23:49.167756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.822 [2024-07-15 14:23:49.167775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.822 [2024-07-15 14:23:49.167788] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.822 [2024-07-15 14:23:49.167799] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.822 [2024-07-15 14:23:49.167911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.822 [2024-07-15 14:23:49.168496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.822 [2024-07-15 14:23:49.168598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.822 [2024-07-15 14:23:49.168605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.388 14:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.388 14:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:10.388 14:23:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:10.388 14:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.388 14:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 14:23:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.646 14:23:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.646 14:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 [2024-07-15 14:23:50.002672] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 [2024-07-15 14:23:50.020162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:10.646 14:23:50 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.646 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:10.904 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:11.162 14:23:50 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:11.162 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:11.421 14:23:50 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.421 14:23:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.421 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.421 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:11.421 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.421 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.421 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:11.421 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:11.679 
14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.679 rmmod nvme_tcp 00:07:11.679 rmmod nvme_fabrics 00:07:11.679 rmmod nvme_keyring 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:11.679 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66353 ']' 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66353 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66353 ']' 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66353 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66353 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.680 killing process with pid 66353 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66353' 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66353 00:07:11.680 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66353 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:11.938 00:07:11.938 real 0m3.001s 00:07:11.938 user 0m10.022s 00:07:11.938 sys 0m0.771s 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.938 ************************************ 00:07:11.938 END TEST nvmf_referrals 00:07:11.938 14:23:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.938 ************************************ 00:07:11.938 14:23:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:11.938 14:23:51 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:11.938 14:23:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:11.938 14:23:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.938 14:23:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:11.938 ************************************ 00:07:11.938 START TEST nvmf_connect_disconnect 00:07:11.938 ************************************ 00:07:11.938 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:12.259 * Looking for test storage... 00:07:12.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.259 14:23:51 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.259 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:07:12.260 Cannot find device "nvmf_tgt_br" 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:12.260 Cannot find device "nvmf_tgt_br2" 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:12.260 Cannot find device "nvmf_tgt_br" 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:12.260 Cannot find device "nvmf_tgt_br2" 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:12.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:12.260 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:12.260 14:23:51 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:12.260 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:12.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:07:12.549 00:07:12.549 --- 10.0.0.2 ping statistics --- 00:07:12.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.549 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:12.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:12.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:07:12.549 00:07:12.549 --- 10.0.0.3 ping statistics --- 00:07:12.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.549 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:12.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:12.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:12.549 00:07:12.549 --- 10.0.0.1 ping statistics --- 00:07:12.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.549 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.549 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66656 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66656 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66656 ']' 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.550 14:23:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.550 [2024-07-15 14:23:51.963914] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:12.550 [2024-07-15 14:23:51.964016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.550 [2024-07-15 14:23:52.098994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.808 [2024-07-15 14:23:52.166884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
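The nvmf_veth_init sequence traced above builds a self-contained test network: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining the host-side ends, an iptables rule admitting NVMe/TCP on port 4420, and ping checks in both directions. A minimal standalone sketch of that bring-up follows, using the interface names and addresses shown in the trace; it assumes iproute2 and iptables are available and that it runs as root.

    # create the target namespace and three veth pairs (one initiator link, two target links)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign the test addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # admit NVMe/TCP traffic and verify connectivity both ways
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1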
00:07:12.808 [2024-07-15 14:23:52.166955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.808 [2024-07-15 14:23:52.166967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.808 [2024-07-15 14:23:52.166976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.808 [2024-07-15 14:23:52.166983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.808 [2024-07-15 14:23:52.167096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.808 [2024-07-15 14:23:52.167172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.808 [2024-07-15 14:23:52.167574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.808 [2024-07-15 14:23:52.167582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 [2024-07-15 14:23:52.298422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:12.808 [2024-07-15 14:23:52.366240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:12.808 14:23:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:15.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:24.217 rmmod nvme_tcp 00:07:24.217 rmmod nvme_fabrics 00:07:24.217 rmmod nvme_keyring 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66656 ']' 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66656 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66656 ']' 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66656 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66656 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66656' 00:07:24.217 killing process with pid 66656 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66656 00:07:24.217 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66656 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:24.475 00:07:24.475 real 0m12.443s 00:07:24.475 user 0m45.350s 00:07:24.475 sys 0m1.924s 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.475 14:24:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:24.475 ************************************ 00:07:24.475 END TEST nvmf_connect_disconnect 00:07:24.475 ************************************ 00:07:24.475 14:24:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:24.475 14:24:03 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:24.475 14:24:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.475 14:24:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.475 14:24:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.475 ************************************ 00:07:24.475 START TEST nvmf_multitarget 00:07:24.475 ************************************ 00:07:24.475 14:24:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:24.475 * Looking for test storage... 
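The connect_disconnect run that just completed reduces to a short RPC sequence followed by repeated fabric connect/disconnect cycles. The sketch below restates those steps; it assumes rpc_cmd in the trace resolves to scripts/rpc.py against the running nvmf_tgt and that nvme-cli is available on the initiator side, and the loop count of 5 matches num_iterations above.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # transport, backing bdev, subsystem, namespace, listener (arguments as traced above)
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0
    $RPC bdev_malloc_create 64 512                      # 64 MiB bdev, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # five connect/disconnect iterations, matching the "disconnected 1 controller(s)" lines
    for i in $(seq 1 5); do
        nvme connect    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done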
00:07:24.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.475 14:24:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.476 14:24:04 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:24.476 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:24.733 Cannot find device "nvmf_tgt_br" 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:24.733 Cannot find device "nvmf_tgt_br2" 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:24.733 Cannot find device "nvmf_tgt_br" 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:24.733 Cannot find device "nvmf_tgt_br2" 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:07:24.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:24.733 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:24.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:24.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:07:24.991 00:07:24.991 --- 10.0.0.2 ping statistics --- 00:07:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.991 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:07:24.991 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:24.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:24.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:07:24.991 00:07:24.991 --- 10.0.0.3 ping statistics --- 00:07:24.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.992 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:24.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:24.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:07:24.992 00:07:24.992 --- 10.0.0.1 ping statistics --- 00:07:24.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.992 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=67039 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 67039 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 67039 ']' 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
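nvmfappstart, traced here for the multitarget test exactly as for connect_disconnect, launches nvmf_tgt inside the target namespace and then blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the binary path and flags shown in the trace; the wait is approximated with a poll of rpc.py rather than the waitforlisten helper itself.

    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # start the target in the test namespace, all trace groups enabled, cores 0-3
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the app is up and listening on /var/tmp/spdk.sock
    until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ready (pid $nvmfpid)"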
00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.992 14:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:24.992 [2024-07-15 14:24:04.506215] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:24.992 [2024-07-15 14:24:04.506341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.250 [2024-07-15 14:24:04.655829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.250 [2024-07-15 14:24:04.727282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.250 [2024-07-15 14:24:04.727339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.250 [2024-07-15 14:24:04.727365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.250 [2024-07-15 14:24:04.727380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.250 [2024-07-15 14:24:04.727391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.250 [2024-07-15 14:24:04.727561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.250 [2024-07-15 14:24:04.727641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.250 [2024-07-15 14:24:04.728233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.250 [2024-07-15 14:24:04.728252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:26.182 14:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:26.440 "nvmf_tgt_1" 00:07:26.440 14:24:05 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:26.440 "nvmf_tgt_2" 00:07:26.440 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:07:26.440 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:26.696 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:26.696 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:26.953 true 00:07:26.953 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:26.953 true 00:07:26.953 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:26.953 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.211 rmmod nvme_tcp 00:07:27.211 rmmod nvme_fabrics 00:07:27.211 rmmod nvme_keyring 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 67039 ']' 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 67039 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 67039 ']' 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 67039 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67039 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:27.211 killing process with pid 67039 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67039' 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 67039 00:07:27.211 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 67039 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:27.469 00:07:27.469 real 0m3.006s 00:07:27.469 user 0m10.297s 00:07:27.469 sys 0m0.632s 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.469 14:24:06 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:27.469 ************************************ 00:07:27.469 END TEST nvmf_multitarget 00:07:27.469 ************************************ 00:07:27.469 14:24:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:27.469 14:24:07 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:27.470 14:24:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:27.470 14:24:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.470 14:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 ************************************ 00:07:27.470 START TEST nvmf_rpc 00:07:27.470 ************************************ 00:07:27.470 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:27.727 * Looking for test storage... 
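The multitarget checks that just passed simply count targets before and after creating and deleting two extra ones. A condensed sketch of that flow; multitarget_rpc.py is the helper invoked in the trace, and the expected counts of 1, 3, and 1 come from the '!=' guards shown above.

    MT_RPC=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    count_targets() { "$MT_RPC" nvmf_get_targets | jq length; }

    [ "$(count_targets)" -eq 1 ]                       # only the default target exists

    "$MT_RPC" nvmf_create_target -n nvmf_tgt_1 -s 32   # add two named targets
    "$MT_RPC" nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$(count_targets)" -eq 3 ]

    "$MT_RPC" nvmf_delete_target -n nvmf_tgt_1         # remove them again
    "$MT_RPC" nvmf_delete_target -n nvmf_tgt_2
    [ "$(count_targets)" -eq 1 ]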
00:07:27.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:27.727 14:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:27.727 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:27.728 Cannot find device "nvmf_tgt_br" 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:27.728 Cannot find device "nvmf_tgt_br2" 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:27.728 Cannot find device "nvmf_tgt_br" 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:27.728 Cannot find device "nvmf_tgt_br2" 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:27.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:27.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:27.728 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:27.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:27.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:07:27.985 00:07:27.985 --- 10.0.0.2 ping statistics --- 00:07:27.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.985 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:27.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:27.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:07:27.985 00:07:27.985 --- 10.0.0.3 ping statistics --- 00:07:27.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.985 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:27.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:27.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:27.985 00:07:27.985 --- 10.0.0.1 ping statistics --- 00:07:27.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:27.985 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67273 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67273 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67273 ']' 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.985 14:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.985 [2024-07-15 14:24:07.493990] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:27.985 [2024-07-15 14:24:07.494092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.243 [2024-07-15 14:24:07.625959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.243 [2024-07-15 14:24:07.709890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.243 [2024-07-15 14:24:07.709947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
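Condensed from the nvmf/common.sh trace above: the harness first tears down any leftover interfaces (hence the harmless "Cannot find device" / "Cannot open network namespace" messages), then rebuilds a veth-and-bridge topology so the initiator at 10.0.0.1 on the host can reach the target at 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, and finally launches nvmf_tgt inside that namespace. A minimal sketch of the setup phase, using the same iproute2/iptables commands that appear in the trace (run as root; interface names and addresses are the harness's own, not general requirements):

# target network namespace
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged on the host
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: initiator on the host, two target interfaces inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and tie the host-side ends together with a bridge
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# let NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check: initiator and target can reach each other
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place, every listener the tests add on 10.0.0.2:4420 is reachable from the host-side initiator through nvmf_br, and nvmf_tgt itself is run via ip netns exec nvmf_tgt_ns_spdk, as the nvmf/common.sh@480 entry in the trace shows.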
00:07:28.243 [2024-07-15 14:24:07.709958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.243 [2024-07-15 14:24:07.709967] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.243 [2024-07-15 14:24:07.709974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.243 [2024-07-15 14:24:07.710068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.243 [2024-07-15 14:24:07.710208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.243 [2024-07-15 14:24:07.710530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.243 [2024-07-15 14:24:07.710534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:29.176 "poll_groups": [ 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_000", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [] 00:07:29.176 }, 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_001", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [] 00:07:29.176 }, 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_002", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [] 00:07:29.176 }, 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_003", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [] 00:07:29.176 } 00:07:29.176 ], 00:07:29.176 "tick_rate": 2200000000 00:07:29.176 }' 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # 
jq '.poll_groups[].name' 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.176 [2024-07-15 14:24:08.658287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:29.176 "poll_groups": [ 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_000", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [ 00:07:29.176 { 00:07:29.176 "trtype": "TCP" 00:07:29.176 } 00:07:29.176 ] 00:07:29.176 }, 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_001", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [ 00:07:29.176 { 00:07:29.176 "trtype": "TCP" 00:07:29.176 } 00:07:29.176 ] 00:07:29.176 }, 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_002", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [ 00:07:29.176 { 00:07:29.176 "trtype": "TCP" 00:07:29.176 } 00:07:29.176 ] 00:07:29.176 }, 00:07:29.176 { 00:07:29.176 "admin_qpairs": 0, 00:07:29.176 "completed_nvme_io": 0, 00:07:29.176 "current_admin_qpairs": 0, 00:07:29.176 "current_io_qpairs": 0, 00:07:29.176 "io_qpairs": 0, 00:07:29.176 "name": "nvmf_tgt_poll_group_003", 00:07:29.176 "pending_bdev_io": 0, 00:07:29.176 "transports": [ 00:07:29.176 { 00:07:29.176 "trtype": "TCP" 00:07:29.176 } 00:07:29.176 ] 00:07:29.176 } 00:07:29.176 ], 00:07:29.176 "tick_rate": 2200000000 00:07:29.176 }' 00:07:29.176 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
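The poll-group assertions above lean on two small helpers from target/rpc.sh: jcount, which counts how many values a jq filter yields, and jsum, which adds them up with awk. Their definitions are not printed in the trace, so the following is only a reconstruction of the pattern the xtrace lines imply (quoting and input handling may differ in the real script):

jcount() {
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l                        # one output line per matched value
}

jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'  # sum of the matched numbers
}

# usage matching the checks in the trace: four poll groups (one per core with -m 0xF),
# and zero admin/io qpairs before any initiator has connected
stats=$(rpc_cmd nvmf_get_stats)
(( $(jcount '.poll_groups[].name') == 4 ))
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))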
00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:29.177 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.434 Malloc1 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.434 [2024-07-15 14:24:08.835770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -a 10.0.0.2 -s 4420 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -a 10.0.0.2 -s 4420 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -a 10.0.0.2 -s 4420 00:07:29.434 [2024-07-15 14:24:08.858017] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95' 00:07:29.434 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:29.434 could not add new controller: failed to write to nvme-fabrics device 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:29.434 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.435 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:29.435 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.435 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.435 14:24:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.435 14:24:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.692 14:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.692 14:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:29.692 14:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.692 14:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:29.692 14:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:31.589 14:24:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.589 [2024-07-15 14:24:11.139059] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95' 00:07:31.589 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:31.589 could not add new controller: failed to write to nvme-fabrics device 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.589 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.847 14:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:31.847 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:31.847 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:31.847 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:31.847 14:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:33.746 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:34.004 14:24:13 
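What the two expected-failure blocks above are exercising: the subsystem nqn.2016-06.io.spdk:cnode1 starts with an empty host allow-list and allow_any_host disabled, so the initiator's connect is rejected with "does not allow host", and the harness's NOT wrapper treats that failure as a pass. Whitelisting the host NQN with nvmf_subsystem_add_host, or enabling allow_any_host with -e, then lets the same connect succeed. Roughly, glossing over the internals of the NOT and waitforserial helpers:

NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95

# no host entry yet: the target refuses the connection, NOT inverts the failure
NOT nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" --hostnqn="$HOSTNQN"

# allow this host explicitly (the trace also shows the alternative:
# rpc_cmd nvmf_subsystem_allow_any_host -e "$NQN") and connect for real
rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" --hostnqn="$HOSTNQN"
waitforserial SPDKISFASTANDAWESOME
nvme disconnect -n "$NQN"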
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.004 [2024-07-15 14:24:13.414035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.004 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.262 14:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.262 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:34.262 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.262 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:34.262 14:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 [2024-07-15 14:24:15.708908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.161 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.419 14:24:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.419 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:36.419 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.419 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:36.419 14:24:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:38.317 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:38.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 [2024-07-15 14:24:17.996988] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.576 14:24:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.576 14:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:38.576 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.576 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.576 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.576 14:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:38.834 14:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:38.834 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:38.834 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:38.834 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:38.834 14:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:40.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 [2024-07-15 14:24:20.292979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.751 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:41.009 14:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:41.009 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:41.009 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:41.009 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:41.009 14:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:42.910 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:42.910 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:42.910 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.910 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:42.910 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.910 
14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:42.910 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:43.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 [2024-07-15 14:24:22.672506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.168 14:24:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.168 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.426 14:24:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.426 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:43.426 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.426 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:43.426 14:24:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- 
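Each of the five loop iterations traced above (target/rpc.sh@81-94) runs the same cycle: create the subsystem, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 bdev as namespace 5, open the subsystem to any host, connect from the initiator, wait for the SPDKISFASTANDAWESOME serial to appear, then disconnect and tear everything down. The pass that begins here (rpc.sh@99-107) repeats the create/remove churn without connecting at all. In outline, using the harness's rpc_cmd/waitforserial helpers as they appear in the trace:

NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host "$NQN"

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" --hostnqn="$HOSTNQN"
    waitforserial SPDKISFASTANDAWESOME               # block device shows up on the initiator
    nvme disconnect -n "$NQN"
    waitforserial_disconnect SPDKISFASTANDAWESOME    # and disappears again

    rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5        # nsid 5, matching add_ns -n 5
    rpc_cmd nvmf_delete_subsystem "$NQN"
done

Every connect/disconnect bumps the per-poll-group qpair counters, which is why the closing nvmf_get_stats dump reports non-zero admin_qpairs and io_qpairs and why the final jsum checks only assert that the totals are greater than zero.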
common/autotest_common.sh@10 -- # set +x 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.359 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.359 [2024-07-15 14:24:24.951908] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.617 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 [2024-07-15 14:24:24.999839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 [2024-07-15 14:24:25.047887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 [2024-07-15 14:24:25.099977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 [2024-07-15 14:24:25.147997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.618 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:45.619 "poll_groups": [ 00:07:45.619 { 00:07:45.619 "admin_qpairs": 2, 00:07:45.619 "completed_nvme_io": 115, 00:07:45.619 "current_admin_qpairs": 0, 00:07:45.619 "current_io_qpairs": 0, 00:07:45.619 "io_qpairs": 16, 00:07:45.619 "name": "nvmf_tgt_poll_group_000", 00:07:45.619 "pending_bdev_io": 0, 00:07:45.619 "transports": [ 00:07:45.619 { 00:07:45.619 "trtype": "TCP" 00:07:45.619 } 00:07:45.619 ] 00:07:45.619 }, 00:07:45.619 { 00:07:45.619 "admin_qpairs": 3, 00:07:45.619 "completed_nvme_io": 117, 00:07:45.619 "current_admin_qpairs": 0, 00:07:45.619 "current_io_qpairs": 0, 00:07:45.619 "io_qpairs": 17, 00:07:45.619 "name": "nvmf_tgt_poll_group_001", 00:07:45.619 "pending_bdev_io": 0, 00:07:45.619 "transports": [ 00:07:45.619 { 00:07:45.619 "trtype": "TCP" 00:07:45.619 } 00:07:45.619 ] 00:07:45.619 }, 00:07:45.619 { 00:07:45.619 "admin_qpairs": 1, 00:07:45.619 
"completed_nvme_io": 167, 00:07:45.619 "current_admin_qpairs": 0, 00:07:45.619 "current_io_qpairs": 0, 00:07:45.619 "io_qpairs": 19, 00:07:45.619 "name": "nvmf_tgt_poll_group_002", 00:07:45.619 "pending_bdev_io": 0, 00:07:45.619 "transports": [ 00:07:45.619 { 00:07:45.619 "trtype": "TCP" 00:07:45.619 } 00:07:45.619 ] 00:07:45.619 }, 00:07:45.619 { 00:07:45.619 "admin_qpairs": 1, 00:07:45.619 "completed_nvme_io": 21, 00:07:45.619 "current_admin_qpairs": 0, 00:07:45.619 "current_io_qpairs": 0, 00:07:45.619 "io_qpairs": 18, 00:07:45.619 "name": "nvmf_tgt_poll_group_003", 00:07:45.619 "pending_bdev_io": 0, 00:07:45.619 "transports": [ 00:07:45.619 { 00:07:45.619 "trtype": "TCP" 00:07:45.619 } 00:07:45.619 ] 00:07:45.619 } 00:07:45.619 ], 00:07:45.619 "tick_rate": 2200000000 00:07:45.619 }' 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:45.619 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.877 rmmod nvme_tcp 00:07:45.877 rmmod nvme_fabrics 00:07:45.877 rmmod nvme_keyring 00:07:45.877 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67273 ']' 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67273 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67273 ']' 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67273 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67273 00:07:45.878 killing process with pid 67273 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67273' 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67273 00:07:45.878 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67273 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:46.136 ************************************ 00:07:46.136 END TEST nvmf_rpc 00:07:46.136 ************************************ 00:07:46.136 00:07:46.136 real 0m18.581s 00:07:46.136 user 1m9.574s 00:07:46.136 sys 0m2.735s 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.136 14:24:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.136 14:24:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:46.136 14:24:25 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:46.136 14:24:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:46.136 14:24:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.136 14:24:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.136 ************************************ 00:07:46.136 START TEST nvmf_invalid 00:07:46.136 ************************************ 00:07:46.136 14:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:46.136 * Looking for test storage... 
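The nvmf_rpc checks that closed out above rest on a small jsum helper: it pipes the captured nvmf_get_stats JSON through jq to pull one counter per poll group and through awk to sum them, which is where the (( 7 > 0 )) and (( 70 > 0 )) assertions come from. A minimal sketch of that aggregation, assuming the stats JSON has been saved into $stats as in the run above (the real helper in target/rpc.sh may differ in detail):

# Sum a per-poll-group counter out of the nvmf_get_stats output.
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}
admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')  # 2+3+1+1 = 7 in this run
io_qpairs=$(jsum '.poll_groups[].io_qpairs')        # 16+17+19+18 = 70 in this run
(( admin_qpairs > 0 )) && (( io_qpairs > 0 ))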
00:07:46.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.395 14:24:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.396 
14:24:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.396 14:24:25 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:46.396 Cannot find device "nvmf_tgt_br" 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.396 Cannot find device "nvmf_tgt_br2" 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:46.396 Cannot find device "nvmf_tgt_br" 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:46.396 Cannot find device "nvmf_tgt_br2" 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.396 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:46.396 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:46.654 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:46.654 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:46.654 14:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.654 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:46.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:07:46.655 00:07:46.655 --- 10.0.0.2 ping statistics --- 00:07:46.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.655 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:46.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:46.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:46.655 00:07:46.655 --- 10.0.0.3 ping statistics --- 00:07:46.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.655 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:07:46.655 00:07:46.655 --- 10.0.0.1 ping statistics --- 00:07:46.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.655 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67782 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67782 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67782 ']' 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.655 14:24:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 [2024-07-15 14:24:26.201658] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
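The nvmf_veth_init sequence logged above reduces to a small veth-plus-bridge topology: the target interfaces live inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2/24 and 10.0.0.3/24, the initiator side stays in the root namespace on 10.0.0.1/24, and both sides are joined through the nvmf_br bridge with TCP port 4420 opened in iptables. A condensed sketch of the same setup (only the first target interface shown; nvmf_tgt_if2/10.0.0.3 is wired up the same way, and the script's cleanup and error handling are omitted):

# Condensed sketch of the topology nvmf_veth_init builds above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # root namespace -> target namespace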
00:07:46.655 [2024-07-15 14:24:26.201816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.913 [2024-07-15 14:24:26.351868] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.913 [2024-07-15 14:24:26.425947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.913 [2024-07-15 14:24:26.426286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.913 [2024-07-15 14:24:26.426459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.913 [2024-07-15 14:24:26.426716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.913 [2024-07-15 14:24:26.426900] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.913 [2024-07-15 14:24:26.427066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.913 [2024-07-15 14:24:26.427146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.913 [2024-07-15 14:24:26.427266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.913 [2024-07-15 14:24:26.427275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:47.847 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30617 00:07:48.105 [2024-07-15 14:24:27.567043] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:48.105 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 14:24:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30617 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:48.106 request: 00:07:48.106 { 00:07:48.106 "method": "nvmf_create_subsystem", 00:07:48.106 "params": { 00:07:48.106 "nqn": "nqn.2016-06.io.spdk:cnode30617", 00:07:48.106 "tgt_name": "foobar" 00:07:48.106 } 00:07:48.106 } 00:07:48.106 Got JSON-RPC error response 00:07:48.106 GoRPCClient: error on JSON-RPC call' 00:07:48.106 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 14:24:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30617 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:48.106 
request: 00:07:48.106 { 00:07:48.106 "method": "nvmf_create_subsystem", 00:07:48.106 "params": { 00:07:48.106 "nqn": "nqn.2016-06.io.spdk:cnode30617", 00:07:48.106 "tgt_name": "foobar" 00:07:48.106 } 00:07:48.106 } 00:07:48.106 Got JSON-RPC error response 00:07:48.106 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:48.106 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:48.106 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13298 00:07:48.363 [2024-07-15 14:24:27.815291] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13298: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:48.363 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 14:24:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13298 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:48.363 request: 00:07:48.363 { 00:07:48.363 "method": "nvmf_create_subsystem", 00:07:48.363 "params": { 00:07:48.363 "nqn": "nqn.2016-06.io.spdk:cnode13298", 00:07:48.363 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:48.363 } 00:07:48.363 } 00:07:48.363 Got JSON-RPC error response 00:07:48.363 GoRPCClient: error on JSON-RPC call' 00:07:48.364 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 14:24:27 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13298 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:48.364 request: 00:07:48.364 { 00:07:48.364 "method": "nvmf_create_subsystem", 00:07:48.364 "params": { 00:07:48.364 "nqn": "nqn.2016-06.io.spdk:cnode13298", 00:07:48.364 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:48.364 } 00:07:48.364 } 00:07:48.364 Got JSON-RPC error response 00:07:48.364 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:48.364 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:48.364 14:24:27 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15965 00:07:48.931 [2024-07-15 14:24:28.227625] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15965: invalid model number 'SPDK_Controller' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 14:24:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode15965], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:48.931 request: 00:07:48.931 { 00:07:48.931 "method": "nvmf_create_subsystem", 00:07:48.931 "params": { 00:07:48.931 "nqn": "nqn.2016-06.io.spdk:cnode15965", 00:07:48.931 "model_number": "SPDK_Controller\u001f" 00:07:48.931 } 00:07:48.931 } 00:07:48.931 Got JSON-RPC error response 00:07:48.931 GoRPCClient: error on JSON-RPC call' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 14:24:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode15965], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:48.931 request: 00:07:48.931 { 00:07:48.931 "method": "nvmf_create_subsystem", 00:07:48.931 "params": { 00:07:48.931 "nqn": "nqn.2016-06.io.spdk:cnode15965", 00:07:48.931 "model_number": "SPDK_Controller\u001f" 00:07:48.931 } 00:07:48.931 } 00:07:48.931 Got JSON-RPC error response 00:07:48.931 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:07:48.931 14:24:28 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:48.931 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:48.932 14:24:28 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:48.932 14:24:28 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ E == \- ]] 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'E^iESFc|L0>NOL]UYnbua' 00:07:48.932 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'E^iESFc|L0>NOL]UYnbua' nqn.2016-06.io.spdk:cnode31896 00:07:49.191 [2024-07-15 14:24:28.744095] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31896: invalid serial number 'E^iESFc|L0>NOL]UYnbua' 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 14:24:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31896 serial_number:E^iESFc|L0>NOL]UYnbua], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN E^iESFc|L0>NOL]UYnbua 00:07:49.191 request: 00:07:49.191 { 00:07:49.191 "method": "nvmf_create_subsystem", 00:07:49.191 "params": { 00:07:49.191 "nqn": "nqn.2016-06.io.spdk:cnode31896", 00:07:49.191 "serial_number": "E^iESFc|L0>NOL]UYnbua" 00:07:49.191 } 00:07:49.191 } 00:07:49.191 Got JSON-RPC error response 00:07:49.191 GoRPCClient: error on JSON-RPC call' 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 14:24:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode31896 serial_number:E^iESFc|L0>NOL]UYnbua], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN E^iESFc|L0>NOL]UYnbua 00:07:49.191 request: 00:07:49.191 { 00:07:49.191 "method": "nvmf_create_subsystem", 00:07:49.191 "params": { 00:07:49.191 "nqn": "nqn.2016-06.io.spdk:cnode31896", 00:07:49.191 "serial_number": "E^iESFc|L0>NOL]UYnbua" 00:07:49.191 } 00:07:49.191 } 00:07:49.191 Got JSON-RPC error response 00:07:49.191 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.191 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 60 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x23' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=O 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:49.450 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:07:49.451 14:24:28 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K' 00:07:49.451 14:24:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 
'w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K' nqn.2016-06.io.spdk:cnode20801 00:07:49.709 [2024-07-15 14:24:29.236904] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20801: invalid model number 'w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K' 00:07:49.709 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 14:24:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K nqn:nqn.2016-06.io.spdk:cnode20801], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K 00:07:49.709 request: 00:07:49.709 { 00:07:49.709 "method": "nvmf_create_subsystem", 00:07:49.709 "params": { 00:07:49.709 "nqn": "nqn.2016-06.io.spdk:cnode20801", 00:07:49.709 "model_number": "w5e_?\\70l<(?L-^~>#`O_p32\u007fOD|><[}h\u007fWiQq25K" 00:07:49.709 } 00:07:49.709 } 00:07:49.709 Got JSON-RPC error response 00:07:49.709 GoRPCClient: error on JSON-RPC call' 00:07:49.709 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 14:24:29 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K nqn:nqn.2016-06.io.spdk:cnode20801], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN w5e_?\70l<(?L-^~>#`O_p32OD|><[}hWiQq25K 00:07:49.709 request: 00:07:49.709 { 00:07:49.709 "method": "nvmf_create_subsystem", 00:07:49.709 "params": { 00:07:49.709 "nqn": "nqn.2016-06.io.spdk:cnode20801", 00:07:49.709 "model_number": "w5e_?\\70l<(?L-^~>#`O_p32\u007fOD|><[}h\u007fWiQq25K" 00:07:49.709 } 00:07:49.709 } 00:07:49.709 Got JSON-RPC error response 00:07:49.709 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:49.709 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:50.034 [2024-07-15 14:24:29.557748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.034 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:50.306 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:50.306 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:50.306 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:50.306 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:50.306 14:24:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:50.565 [2024-07-15 14:24:30.099569] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:50.565 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 14:24:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:50.565 request: 00:07:50.565 { 00:07:50.565 "method": "nvmf_subsystem_remove_listener", 00:07:50.565 "params": { 00:07:50.565 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:50.565 "listen_address": { 00:07:50.565 "trtype": "tcp", 00:07:50.565 "traddr": "", 00:07:50.565 "trsvcid": "4421" 00:07:50.565 } 
00:07:50.565 } 00:07:50.565 } 00:07:50.565 Got JSON-RPC error response 00:07:50.566 GoRPCClient: error on JSON-RPC call' 00:07:50.566 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 14:24:30 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:50.566 request: 00:07:50.566 { 00:07:50.566 "method": "nvmf_subsystem_remove_listener", 00:07:50.566 "params": { 00:07:50.566 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:50.566 "listen_address": { 00:07:50.566 "trtype": "tcp", 00:07:50.566 "traddr": "", 00:07:50.566 "trsvcid": "4421" 00:07:50.566 } 00:07:50.566 } 00:07:50.566 } 00:07:50.566 Got JSON-RPC error response 00:07:50.566 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:50.566 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20816 -i 0 00:07:51.132 [2024-07-15 14:24:30.439834] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20816: invalid cntlid range [0-65519] 00:07:51.132 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 14:24:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:51.132 request: 00:07:51.132 { 00:07:51.132 "method": "nvmf_create_subsystem", 00:07:51.132 "params": { 00:07:51.132 "nqn": "nqn.2016-06.io.spdk:cnode20816", 00:07:51.132 "min_cntlid": 0 00:07:51.132 } 00:07:51.132 } 00:07:51.132 Got JSON-RPC error response 00:07:51.132 GoRPCClient: error on JSON-RPC call' 00:07:51.132 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 14:24:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode20816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:51.132 request: 00:07:51.132 { 00:07:51.132 "method": "nvmf_create_subsystem", 00:07:51.132 "params": { 00:07:51.132 "nqn": "nqn.2016-06.io.spdk:cnode20816", 00:07:51.132 "min_cntlid": 0 00:07:51.132 } 00:07:51.132 } 00:07:51.132 Got JSON-RPC error response 00:07:51.132 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:51.132 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9328 -i 65520 00:07:51.389 [2024-07-15 14:24:30.840217] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9328: invalid cntlid range [65520-65519] 00:07:51.389 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 14:24:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9328], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:51.390 request: 00:07:51.390 { 00:07:51.390 "method": "nvmf_create_subsystem", 00:07:51.390 "params": { 00:07:51.390 "nqn": "nqn.2016-06.io.spdk:cnode9328", 00:07:51.390 "min_cntlid": 65520 00:07:51.390 } 00:07:51.390 } 00:07:51.390 Got JSON-RPC error 
response 00:07:51.390 GoRPCClient: error on JSON-RPC call' 00:07:51.390 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 14:24:30 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode9328], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:51.390 request: 00:07:51.390 { 00:07:51.390 "method": "nvmf_create_subsystem", 00:07:51.390 "params": { 00:07:51.390 "nqn": "nqn.2016-06.io.spdk:cnode9328", 00:07:51.390 "min_cntlid": 65520 00:07:51.390 } 00:07:51.390 } 00:07:51.390 Got JSON-RPC error response 00:07:51.390 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:51.390 14:24:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13863 -I 0 00:07:51.646 [2024-07-15 14:24:31.100250] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13863: invalid cntlid range [1-0] 00:07:51.646 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 14:24:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13863], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:51.646 request: 00:07:51.646 { 00:07:51.646 "method": "nvmf_create_subsystem", 00:07:51.646 "params": { 00:07:51.646 "nqn": "nqn.2016-06.io.spdk:cnode13863", 00:07:51.646 "max_cntlid": 0 00:07:51.646 } 00:07:51.646 } 00:07:51.646 Got JSON-RPC error response 00:07:51.646 GoRPCClient: error on JSON-RPC call' 00:07:51.646 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 14:24:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode13863], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:51.646 request: 00:07:51.646 { 00:07:51.646 "method": "nvmf_create_subsystem", 00:07:51.646 "params": { 00:07:51.646 "nqn": "nqn.2016-06.io.spdk:cnode13863", 00:07:51.646 "max_cntlid": 0 00:07:51.646 } 00:07:51.646 } 00:07:51.646 Got JSON-RPC error response 00:07:51.646 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:51.646 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28822 -I 65520 00:07:51.902 [2024-07-15 14:24:31.384532] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28822: invalid cntlid range [1-65520] 00:07:51.902 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 14:24:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28822], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:51.902 request: 00:07:51.902 { 00:07:51.902 "method": "nvmf_create_subsystem", 00:07:51.902 "params": { 00:07:51.902 "nqn": "nqn.2016-06.io.spdk:cnode28822", 00:07:51.902 "max_cntlid": 65520 00:07:51.902 } 00:07:51.902 } 00:07:51.902 Got JSON-RPC error response 00:07:51.902 GoRPCClient: error on JSON-RPC call' 00:07:51.902 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 14:24:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode28822], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:51.902 request: 00:07:51.902 { 00:07:51.903 "method": "nvmf_create_subsystem", 00:07:51.903 "params": { 00:07:51.903 "nqn": "nqn.2016-06.io.spdk:cnode28822", 00:07:51.903 "max_cntlid": 65520 00:07:51.903 } 00:07:51.903 } 00:07:51.903 Got JSON-RPC error response 00:07:51.903 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:51.903 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27160 -i 6 -I 5 00:07:52.468 [2024-07-15 14:24:31.756924] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27160: invalid cntlid range [6-5] 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 14:24:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode27160], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:52.468 request: 00:07:52.468 { 00:07:52.468 "method": "nvmf_create_subsystem", 00:07:52.468 "params": { 00:07:52.468 "nqn": "nqn.2016-06.io.spdk:cnode27160", 00:07:52.468 "min_cntlid": 6, 00:07:52.468 "max_cntlid": 5 00:07:52.468 } 00:07:52.468 } 00:07:52.468 Got JSON-RPC error response 00:07:52.468 GoRPCClient: error on JSON-RPC call' 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 14:24:31 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode27160], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:52.468 request: 00:07:52.468 { 00:07:52.468 "method": "nvmf_create_subsystem", 00:07:52.468 "params": { 00:07:52.468 "nqn": "nqn.2016-06.io.spdk:cnode27160", 00:07:52.468 "min_cntlid": 6, 00:07:52.468 "max_cntlid": 5 00:07:52.468 } 00:07:52.468 } 00:07:52.468 Got JSON-RPC error response 00:07:52.468 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:52.468 { 00:07:52.468 "name": "foobar", 00:07:52.468 "method": "nvmf_delete_target", 00:07:52.468 "req_id": 1 00:07:52.468 } 00:07:52.468 Got JSON-RPC error response 00:07:52.468 response: 00:07:52.468 { 00:07:52.468 "code": -32602, 00:07:52.468 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:52.468 }' 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:52.468 { 00:07:52.468 "name": "foobar", 00:07:52.468 "method": "nvmf_delete_target", 00:07:52.468 "req_id": 1 00:07:52.468 } 00:07:52.468 Got JSON-RPC error response 00:07:52.468 response: 00:07:52.468 { 00:07:52.468 "code": -32602, 00:07:52.468 "message": "The specified target doesn't exist, cannot delete it." 
00:07:52.468 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.468 14:24:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.468 rmmod nvme_tcp 00:07:52.468 rmmod nvme_fabrics 00:07:52.468 rmmod nvme_keyring 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67782 ']' 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67782 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67782 ']' 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67782 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67782 00:07:52.468 killing process with pid 67782 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67782' 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67782 00:07:52.468 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67782 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:52.726 ************************************ 00:07:52.726 END TEST nvmf_invalid 00:07:52.726 ************************************ 00:07:52.726 00:07:52.726 real 0m6.579s 00:07:52.726 user 0m27.062s 00:07:52.726 sys 0m1.223s 00:07:52.726 
14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.726 14:24:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:52.726 14:24:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:52.726 14:24:32 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:52.726 14:24:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:52.726 14:24:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.726 14:24:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.727 ************************************ 00:07:52.727 START TEST nvmf_abort 00:07:52.727 ************************************ 00:07:52.727 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:52.985 * Looking for test storage... 00:07:52.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.985 14:24:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:52.985 Cannot find device "nvmf_tgt_br" 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.985 Cannot find device "nvmf_tgt_br2" 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:52.985 Cannot find device "nvmf_tgt_br" 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:52.985 Cannot find device "nvmf_tgt_br2" 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:52.985 14:24:32 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.985 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:52.985 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
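The commands above are nvmf_veth_init from test/nvmf/common.sh: the SPDK target is isolated in the nvmf_tgt_ns_spdk network namespace and reached over veth pairs joined by a bridge, and the ping replies that confirm the wiring follow below. As a rough, condensed sketch of the same topology (interface names and addresses taken from the log, only the first target interface shown, run as root; not the full helper):

  # Build the CI test network: the initiator stays in the root namespace,
  # the SPDK target lives in nvmf_tgt_ns_spdk.
  set -e
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_br ends stay in the root namespace and join the bridge,
  # the other ends carry the IP addresses.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addresses used throughout the tests: 10.0.0.1 = initiator, 10.0.0.2 = target.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the root-namespace ends together and let NVMe/TCP traffic through.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Reachability check, mirroring the ping the harness runs next.
  ping -c 1 10.0.0.2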
00:07:53.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:07:53.243 00:07:53.243 --- 10.0.0.2 ping statistics --- 00:07:53.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.243 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:07:53.243 00:07:53.243 --- 10.0.0.3 ping statistics --- 00:07:53.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.243 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:53.243 00:07:53.243 --- 10.0.0.1 ping statistics --- 00:07:53.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.243 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:53.243 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68295 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68295 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68295 ']' 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
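With connectivity verified, nvmfappstart launches the target binary inside the namespace and waitforlisten blocks until its JSON-RPC socket is usable. A minimal approximation of that startup step, assuming the repository path shown in the log and substituting a simple poll on the RPC socket for the real waitforlisten helper (which does more validation):

  # Start nvmf_tgt in the test namespace, mirroring the log's nvmfappstart call:
  # -e 0xFFFF enables all tracepoint groups, -m 0xE runs reactors on cores 1-3.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk    # path taken from the log
  RPC_SOCK=/var/tmp/spdk.sock              # default SPDK RPC socket, as in the log

  ip netns exec nvmf_tgt_ns_spdk \
      "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Crude stand-in for waitforlisten: poll until the RPC socket appears.
  for _ in $(seq 1 100); do
      [ -S "$RPC_SOCK" ] && break
      sleep 0.1
  done
  echo "nvmf_tgt running as pid $nvmfpid"

Once the socket is up, the transport and bdevs are created over JSON-RPC, as the next lines of the log show.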
00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.244 14:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.244 [2024-07-15 14:24:32.793629] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:53.244 [2024-07-15 14:24:32.793771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.502 [2024-07-15 14:24:32.937775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.502 [2024-07-15 14:24:32.996178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.502 [2024-07-15 14:24:32.996228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.502 [2024-07-15 14:24:32.996238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.502 [2024-07-15 14:24:32.996246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.503 [2024-07-15 14:24:32.996253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.503 [2024-07-15 14:24:32.996323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.503 [2024-07-15 14:24:32.996403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.503 [2024-07-15 14:24:32.996413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.503 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.503 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:53.503 14:24:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.503 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.503 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 [2024-07-15 14:24:33.119779] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 Malloc0 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
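abort.sh then provisions the target entirely through rpc.py: a TCP transport, a 64 MB malloc bdev, and a delay bdev stacked on top so that in-flight I/O stays outstanding long enough to be aborted. Written out as plain rpc.py invocations (the rpc_cmd wrapper in the log forwards its arguments to rpc.py unchanged; options copied verbatim, delay values are in microseconds):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path from the log

  # TCP transport with the same options the test passes ($NVMF_TRANSPORT_OPTS
  # resolves to "-t tcp -o" on this run, plus the test's -u 8192 -a 256).
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256

  # 64 MB RAM-backed bdev with 4096-byte blocks (MALLOC_BDEV_SIZE/BLOCK_SIZE).
  $RPC bdev_malloc_create 64 4096 -b Malloc0

  # Delay bdev adding roughly one second of latency to reads and writes,
  # which keeps commands in flight so the abort requests have targets.
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000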
00:07:53.760 Delay0 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 [2024-07-15 14:24:33.183518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.760 14:24:33 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:54.017 [2024-07-15 14:24:33.359421] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:55.938 Initializing NVMe Controllers 00:07:55.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:55.938 controller IO queue size 128 less than required 00:07:55.938 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:55.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:55.938 Initialization complete. Launching workers. 
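The delay bdev is exported as namespace 1 of subsystem cnode0 on the 10.0.0.2:4420 listener, and the abort example then drives it at queue depth 128 while issuing aborts; its per-controller completion statistics follow below. Condensed to the underlying commands (copied from the log; -c is the core mask and -t the run time in seconds, as in the other SPDK example apps):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Subsystem wiring: allow-any-host subsystem, delay bdev as its namespace,
  # data listener plus discovery listener on the target address.
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Abort workload: one core, one second, queue depth 128, warnings only.
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128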
00:07:55.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32628 00:07:55.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32689, failed to submit 62 00:07:55.938 success 32632, unsuccess 57, failed 0 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.938 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.939 rmmod nvme_tcp 00:07:55.939 rmmod nvme_fabrics 00:07:55.939 rmmod nvme_keyring 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68295 ']' 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68295 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68295 ']' 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68295 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68295 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:55.939 killing process with pid 68295 00:07:55.939 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68295' 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68295 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68295 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:56.197 00:07:56.197 real 0m3.463s 00:07:56.197 user 0m9.895s 00:07:56.197 sys 0m0.877s 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.197 ************************************ 00:07:56.197 END TEST nvmf_abort 00:07:56.197 14:24:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.197 ************************************ 00:07:56.197 14:24:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:56.197 14:24:35 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.197 14:24:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.197 14:24:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.197 14:24:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.197 ************************************ 00:07:56.197 START TEST nvmf_ns_hotplug_stress 00:07:56.197 ************************************ 00:07:56.197 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.455 * Looking for test storage... 00:07:56.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:56.455 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.456 14:24:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.456 14:24:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:56.456 Cannot find device "nvmf_tgt_br" 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.456 Cannot find device "nvmf_tgt_br2" 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:56.456 Cannot find device "nvmf_tgt_br" 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:56.456 Cannot find device "nvmf_tgt_br2" 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:56.456 14:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:56.456 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:56.714 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:56.714 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:56.714 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:56.715 14:24:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:56.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:07:56.715 00:07:56.715 --- 10.0.0.2 ping statistics --- 00:07:56.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.715 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:56.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:56.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:07:56.715 00:07:56.715 --- 10.0.0.3 ping statistics --- 00:07:56.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.715 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:56.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:07:56.715 00:07:56.715 --- 10.0.0.1 ping statistics --- 00:07:56.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.715 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68523 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68523 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68523 ']' 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.715 14:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:56.973 [2024-07-15 14:24:36.326225] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:07:56.973 [2024-07-15 14:24:36.326327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.973 [2024-07-15 14:24:36.467015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.973 [2024-07-15 14:24:36.542153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
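The "Cannot find device ..." failures above are the pre-test cleanup hitting interfaces that do not exist yet on a fresh runner (the script tolerates them, hence the bare "true" calls); nvmf_veth_init then rebuilds the topology from scratch. Condensed into one place, the plumbing it performs is roughly the following; the interface names, addresses and iptables rules are copied from the trace, but this is a sketch of the sequence, not the verbatim body of nvmf/common.sh:

# Target namespace plus three veth pairs: one initiator link, two target links.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer interfaces together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP traffic on the initiator interface, allow forwarding across the
# bridge, then verify reachability in both directions, as the pings above do.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1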
00:07:56.973 [2024-07-15 14:24:36.542211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.973 [2024-07-15 14:24:36.542223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.973 [2024-07-15 14:24:36.542231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.973 [2024-07-15 14:24:36.542239] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.973 [2024-07-15 14:24:36.542368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.973 [2024-07-15 14:24:36.543012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.973 [2024-07-15 14:24:36.543024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:58.344 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.344 [2024-07-15 14:24:37.936435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.601 14:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:58.859 14:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.423 [2024-07-15 14:24:38.802746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.423 14:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.681 14:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:59.938 Malloc0 00:07:59.938 14:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.196 Delay0 00:08:00.196 14:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.454 14:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:00.712 NULL1 00:08:00.712 
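With nvmf_tgt running inside nvmf_tgt_ns_spdk (started above with -m 0xE, so reactors on cores 1-3, and listening on /var/tmp/spdk.sock), the configuration above is driven entirely through rpc.py. The sequence below is condensed from the trace with the same flags as logged; the $rpc_py shorthand mirrors the variable set at ns_hotplug_stress.sh@11, and the comments are interpretation, not script text:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # as set at ns_hotplug_stress.sh@11

# TCP transport with the flags as logged, then a subsystem that allows any host,
# carries serial SPDK00000000000001 and is capped at 10 namespaces.
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Data and discovery listeners on the namespaced address, port 4420.
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: Malloc0 wrapped in the Delay0 delay bdev (1000000 per the -r/-t/-w/-n
# arguments logged), plus the NULL1 null bdev that the resize loop keeps growing.
$rpc_py bdev_malloc_create 32 512 -b Malloc0
$rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc_py bdev_null_create NULL1 1000 512

# Delay0 becomes namespace 1; NULL1 is attached as namespace 2 just below (@36).
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0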
14:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:00.970 14:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68667 00:08:00.970 14:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:00.970 14:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:00.970 14:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.341 14:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.341 Read completed with error (sct=0, sc=11) 00:08:02.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.599 14:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:02.599 14:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:02.857 true 00:08:02.857 14:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:02.857 14:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.791 14:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.791 14:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:03.791 14:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:04.050 true 00:08:04.308 14:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:04.308 14:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.567 14:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.825 14:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:04.825 14:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:05.084 true 00:08:05.084 14:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:05.084 14:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:05.651 14:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.909 14:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:05.909 14:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:06.168 true 00:08:06.168 14:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:06.168 14:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.426 14:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.684 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:06.684 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:06.945 true 00:08:06.945 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:06.945 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.203 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.462 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:07.462 14:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:07.720 true 00:08:07.720 14:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:07.720 14:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.657 14:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.915 14:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:08.915 14:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:09.173 true 00:08:09.173 14:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:09.173 14:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.431 14:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.688 14:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:09.688 14:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:09.947 true 00:08:09.947 14:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:09.947 14:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.206 14:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.465 14:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:10.465 14:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:10.724 true 00:08:10.983 14:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:10.983 14:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.550 14:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:12.065 14:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:12.065 14:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:12.322 true 00:08:12.322 14:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:12.322 14:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.579 14:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.836 14:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:12.836 14:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:13.094 true 00:08:13.094 14:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:13.094 14:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.352 14:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.610 14:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:13.610 14:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:13.867 true 00:08:13.867 14:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:13.867 14:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.799 14:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.056 14:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:15.056 14:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:15.314 true 00:08:15.314 14:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:15.314 14:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.572 14:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.830 14:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:15.830 14:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:16.088 true 00:08:16.088 14:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:16.088 14:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.346 14:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.603 14:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:16.603 14:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:16.861 true 00:08:17.123 14:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:17.123 14:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.384 14:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.645 14:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:17.645 14:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:17.904 true 00:08:17.904 14:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:17.904 14:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.839 14:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.097 14:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:19.097 14:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:19.356 true 00:08:19.356 14:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:19.356 14:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.614 14:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.871 14:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:19.871 14:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:20.129 true 00:08:20.129 14:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:20.129 14:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.387 14:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.645 14:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:20.645 14:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:20.954 true 00:08:20.954 14:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:20.954 14:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.889 14:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.889 14:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:21.889 14:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:22.453 true 00:08:22.453 14:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:22.453 14:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.453 14:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.018 14:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:23.018 14:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:23.277 true 00:08:23.277 14:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:23.277 14:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
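From here until the perf run ends, the same five-step pattern repeats with null_size counting up from 1001: check that spdk_nvme_perf (PID 68667, started at @40-@42 with -q 128 -o 512 -w randread -t 30 against 10.0.0.2:4420) is still alive, hot-remove namespace 1, re-attach Delay0, then bump the NULL1 resize argument by one. A rough reconstruction of that loop, inferred from the ns_hotplug_stress.sh@44-@50 markers in the trace; the exact loop syntax is an assumption, only the individual commands are taken from the log:

null_size=1000
while kill -0 "$PERF_PID"; do   # PERF_PID: the spdk_nvme_perf process; loop ends when it exits
	$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-unplug nsid 1 (Delay0)
	$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # plug it straight back in
	((null_size++))
	$rpc_py bdev_null_resize NULL1 "$null_size"                       # grow namespace 2's bdev
done

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines interleaved above are the initiator side of this: reads against the namespace that has just been unplugged fail until it reappears, which is exactly the behaviour the stress test is exercising.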
00:08:23.534 14:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.791 14:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:23.791 14:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:24.048 true 00:08:24.307 14:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:24.307 14:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.566 14:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.824 14:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:24.824 14:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:25.084 true 00:08:25.084 14:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:25.084 14:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.342 14:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.601 14:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:25.601 14:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:25.859 true 00:08:25.859 14:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:25.859 14:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.793 14:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.051 14:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:27.051 14:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:27.310 true 00:08:27.310 14:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:27.310 14:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.568 14:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.827 14:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:27.827 14:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:28.086 true 00:08:28.086 14:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:28.086 14:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.344 14:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.602 14:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:28.602 14:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:28.861 true 00:08:28.861 14:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:28.861 14:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.797 14:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.056 14:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:30.056 14:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:30.314 true 00:08:30.314 14:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:30.314 14:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.571 14:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.828 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:30.828 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:31.086 true 00:08:31.086 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:31.086 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.344 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.344 Initializing NVMe Controllers 00:08:31.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.344 Controller IO queue size 128, less than required. 00:08:31.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.344 Controller IO queue size 128, less than required. 00:08:31.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
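As a sanity check on the latency summary just below: the Total row is the two namespaces combined, 229.61 + 6747.57 = 6977.18 IOPS, the min/max columns are the per-namespace extremes, and the average latency is the IOPS-weighted mean, (229.61 * 185656.11 + 6747.57 * 18970.94) / 6977.18, which comes to roughly 24456 us as reported. NSID 1 is the Delay0-backed namespace that keeps being unplugged, hence the very high average latency; NSID 2 is NULL1.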
00:08:31.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:31.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:31.344 Initialization complete. Launching workers. 00:08:31.344 ======================================================== 00:08:31.344 Latency(us) 00:08:31.344 Device Information : IOPS MiB/s Average min max 00:08:31.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 229.61 0.11 185656.11 3366.98 1218784.76 00:08:31.344 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6747.57 3.29 18970.94 3068.20 688105.59 00:08:31.344 ======================================================== 00:08:31.344 Total : 6977.18 3.41 24456.29 3068.20 1218784.76 00:08:31.344 00:08:31.344 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:31.344 14:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:31.908 true 00:08:31.908 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68667 00:08:31.908 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68667) - No such process 00:08:31.908 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68667 00:08:31.908 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.908 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.471 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:32.471 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:32.471 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:32.471 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.471 14:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:32.471 null0 00:08:32.728 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.728 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.728 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:32.985 null1 00:08:32.985 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:32.985 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.985 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:33.242 null2 00:08:33.242 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.242 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.242 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:33.500 null3 00:08:33.500 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.500 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.500 14:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:33.758 null4 00:08:33.758 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.758 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.758 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:34.016 null5 00:08:34.016 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.016 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.016 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:34.273 null6 00:08:34.273 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.273 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.273 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:34.531 null7 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.531 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
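After the perf process exits ("No such process" from kill, further up), both remaining namespaces are removed and the test switches to the parallel phase: eight null bdevs (null0 through null7) and eight background add_remove workers, one per namespace ID. The sketch below is reconstructed from the ns_hotplug_stress.sh@14-@17 and @58-@64 markers traced above plus the @18 removals and the @66 wait that follow just below; the two-loop structure and anything not visible in the trace are assumptions:

add_remove() {   # traced at @14-@18: attach and detach one namespace, ten times
	local nsid=$1 bdev=$2
	for ((i = 0; i < 10; i++)); do
		$rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
		$rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
	done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do   # null0 .. null7, same 100/4096 arguments as logged
	$rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do
	add_remove "$((i + 1))" "null$i" &   # nsid i+1 backed by null<i>, run in the background
	pids+=($!)
done
wait "${pids[@]}"   # the 'wait 69726 69727 ...' seen just below

The xtrace output of the eight background workers interleaves freely, which is why the (( i < 10 )), add_ns and remove_ns lines above and below appear out of order relative to any single worker.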
00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.532 14:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69726 69727 69730 69732 69734 69735 69737 69739 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:34.789 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.116 14:25:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.116 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.383 14:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.641 
14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.641 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.899 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.156 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.413 14:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.671 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.671 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.671 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.671 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.671 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.671 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.929 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.187 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.445 14:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.702 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.702 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
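Stripped of the xtrace prefixes, every iteration in this stream is just a pair of JSON-RPC calls against the running target (rpc.py talks to the default /var/tmp/spdk.sock socket); run by hand they would look like:

    # hot-add bdev null4 as namespace 5 of cnode1, then hot-remove namespace 5 again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5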
00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.703 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.960 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.218 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.476 14:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.476 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.735 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.993 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.250 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.251 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.508 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.508 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.508 14:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.508 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.508 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.508 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.508 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.766 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:40.024 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.283 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.541 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.541 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.541 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.541 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.541 14:25:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.541 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.799 rmmod nvme_tcp 00:08:40.799 rmmod nvme_fabrics 00:08:40.799 rmmod nvme_keyring 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68523 ']' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68523 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68523 ']' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68523 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68523 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:40.800 killing process with pid 68523 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68523' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68523 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68523 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.800 14:25:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.800 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.058 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:41.058 00:08:41.058 real 0m44.617s 00:08:41.058 user 3m39.241s 00:08:41.058 sys 0m12.815s 00:08:41.058 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.058 14:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.058 ************************************ 00:08:41.058 END TEST nvmf_ns_hotplug_stress 00:08:41.058 ************************************ 00:08:41.058 14:25:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:41.058 14:25:20 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:41.058 14:25:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.058 14:25:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.059 14:25:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.059 ************************************ 00:08:41.059 START TEST nvmf_connect_stress 00:08:41.059 ************************************ 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:41.059 * Looking for test storage... 00:08:41.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.059 14:25:20 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
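With the NVMF_* addresses and interface names set above, nvmf_veth_init first tears down any leftovers and then rebuilds the virtual test network; condensed from the ip/iptables commands traced after this note (the second target interface, the link-up steps and the flush/delete preamble are omitted here for brevity):

    # the target runs inside the nvmf_tgt_ns_spdk namespace, the initiator stays in the default one
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP

    # bridge both veth peers together and allow NVMe/TCP traffic to port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT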
00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:41.059 Cannot find device "nvmf_tgt_br" 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:41.059 Cannot find device "nvmf_tgt_br2" 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:41.059 Cannot find device "nvmf_tgt_br" 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:41.059 Cannot find device "nvmf_tgt_br2" 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:41.059 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:41.318 14:25:20 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:41.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:08:41.318 00:08:41.318 --- 10.0.0.2 ping statistics --- 00:08:41.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.318 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:41.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:08:41.318 00:08:41.318 --- 10.0.0.3 ping statistics --- 00:08:41.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.318 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:08:41.318 00:08:41.318 --- 10.0.0.1 ping statistics --- 00:08:41.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.318 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.318 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.576 14:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:41.576 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=71048 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 71048 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 71048 ']' 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.577 14:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:41.577 [2024-07-15 14:25:20.983075] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:41.577 [2024-07-15 14:25:20.983160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.577 [2024-07-15 14:25:21.126862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:41.835 [2024-07-15 14:25:21.195029] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:41.835 [2024-07-15 14:25:21.195083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.835 [2024-07-15 14:25:21.195097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.835 [2024-07-15 14:25:21.195107] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.835 [2024-07-15 14:25:21.195116] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.835 [2024-07-15 14:25:21.195439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.835 [2024-07-15 14:25:21.196006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.835 [2024-07-15 14:25:21.196055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.401 14:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.401 14:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:42.401 14:25:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.401 14:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.401 14:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 [2024-07-15 14:25:22.017204] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 [2024-07-15 14:25:22.034542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.659 NULL1 00:08:42.659 14:25:22 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71100 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.659 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.918 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.918 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:42.918 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.918 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.918 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.176 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.176 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:43.176 14:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.176 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.176 14:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:43.744 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.744 14:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:43.744 14:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:43.744 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.744 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.001 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.001 14:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:44.001 14:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.001 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
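Note: the connect_stress.sh@34 / @35 pairs that repeat from here on are a watchdog loop: while the background connect_stress process (PERF_PID=71100) is still alive, the script keeps replaying the RPC batch it just assembled into rpc.txt. Roughly, as inferred from the trace (a sketch, not the literal script):

  # exercise the target's RPC server for as long as the stress tool keeps running
  while kill -0 "$PERF_PID"; do      # succeeds only while PID 71100 still exists
      rpc_cmd < "$rpcs"              # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
  done
  wait "$PERF_PID"                   # collect the stress tool's exit status
  rm -f "$rpcs"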
00:08:44.001 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.259 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.259 14:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:44.259 14:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.259 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.259 14:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.516 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.516 14:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:44.516 14:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.516 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.516 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:44.777 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.777 14:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:44.777 14:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:44.777 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.777 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.354 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.354 14:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:45.354 14:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.354 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.354 14:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.612 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.613 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:45.613 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.613 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.613 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.918 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.918 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:45.918 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.918 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.918 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.182 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.182 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:46.182 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.182 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.182 14:25:25 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.440 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.440 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:46.440 14:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.440 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.440 14:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.007 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.007 14:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:47.007 14:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.007 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.007 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.265 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.265 14:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:47.265 14:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.265 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.265 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.524 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.524 14:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:47.524 14:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.524 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.524 14:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.782 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.782 14:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:47.782 14:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.782 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.782 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.040 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.040 14:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:48.040 14:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.040 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.040 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.606 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.606 14:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:48.606 14:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.606 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.606 14:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.865 14:25:28 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.865 14:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:48.865 14:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.865 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.865 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.124 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.124 14:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:49.124 14:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.124 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.124 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.382 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.382 14:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:49.382 14:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.382 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.382 14:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.641 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.641 14:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:49.641 14:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.641 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.641 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.207 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.207 14:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:50.207 14:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.207 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.207 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.465 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.465 14:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:50.465 14:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.465 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.465 14:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.724 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.724 14:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:50.724 14:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.724 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.724 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.982 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:08:50.982 14:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:50.982 14:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.982 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.982 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.240 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.240 14:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:51.240 14:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.240 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.240 14:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.806 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.806 14:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:51.806 14:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.806 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.806 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.064 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.064 14:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:52.064 14:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.064 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.064 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.321 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.321 14:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:52.321 14:25:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.321 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.322 14:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.580 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.580 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:52.580 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.580 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.580 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.968 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71100 00:08:52.968 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71100) - No such process 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71100 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 
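Note: the subsystem that connect_stress just finished exercising was provisioned earlier through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py. Outside the harness the same provisioning could be issued directly; the flags below are taken verbatim from the rpc_cmd calls in the trace, so this is only an illustrative equivalent:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512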
00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.968 rmmod nvme_tcp 00:08:52.968 rmmod nvme_fabrics 00:08:52.968 rmmod nvme_keyring 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 71048 ']' 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 71048 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 71048 ']' 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 71048 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71048 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:52.968 killing process with pid 71048 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71048' 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 71048 00:08:52.968 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 71048 00:08:53.254 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:53.254 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:53.254 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:53.254 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.254 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:53.255 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.255 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.255 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.255 14:25:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:53.255 00:08:53.255 real 0m12.260s 00:08:53.255 user 0m40.841s 00:08:53.255 sys 0m3.275s 00:08:53.255 14:25:32 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:53.255 14:25:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.255 ************************************ 00:08:53.255 END TEST nvmf_connect_stress 00:08:53.255 ************************************ 00:08:53.255 14:25:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:53.255 14:25:32 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:53.255 14:25:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:53.255 14:25:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.255 14:25:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.255 ************************************ 00:08:53.255 START TEST nvmf_fused_ordering 00:08:53.255 ************************************ 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:53.255 * Looking for test storage... 00:08:53.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.255 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 
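Note: nvme gen-hostnqn above produced a UUID-based host NQN, and common.sh keeps both forms: NVME_HOSTNQN gets the full nqn.2014-08.org.nvmexpress:uuid:<uuid> string and NVME_HOSTID gets the bare UUID, later passed as --hostnqn/--hostid on initiator connections. An equivalent way to generate the pair (illustrative only):

  uuid=$(uuidgen)                                          # e.g. de9cbd2d-f291-4e0a-9053-0006bfbcdd95 in this run
  NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
  NVME_HOSTID="$uuid"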
00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:53.514 Cannot find device "nvmf_tgt_br" 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.514 Cannot find device "nvmf_tgt_br2" 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set 
nvmf_init_br down 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:53.514 Cannot find device "nvmf_tgt_br" 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:53.514 Cannot find device "nvmf_tgt_br2" 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:53.514 14:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:53.514 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:53.772 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:53.772 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:53.772 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
00:08:53.772 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:53.772 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:53.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:08:53.773 00:08:53.773 --- 10.0.0.2 ping statistics --- 00:08:53.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.773 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:53.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:53.773 00:08:53.773 --- 10.0.0.3 ping statistics --- 00:08:53.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.773 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:53.773 00:08:53.773 --- 10.0.0.1 ping statistics --- 00:08:53.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.773 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:53.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
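Note: the connect_stress target earlier ran with -m 0xE, which is why three reactors came up (cores 1, 2 and 3), while nvmfappstart -m 0x2 here asks for a single reactor on core 1. The mask is simply a bitmap of core indices:

  printf '0x%X\n' $(( (1<<1) | (1<<2) | (1<<3) ))   # 0xE -> cores 1,2,3
  printf '0x%X\n' $(( 1<<1 ))                       # 0x2 -> core 1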
00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71425 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71425 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71425 ']' 00:08:53.773 14:25:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.774 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.774 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.774 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.774 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.774 14:25:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:53.774 [2024-07-15 14:25:33.267520] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:53.774 [2024-07-15 14:25:33.267633] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.032 [2024-07-15 14:25:33.405108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.032 [2024-07-15 14:25:33.463357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.032 [2024-07-15 14:25:33.463415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.032 [2024-07-15 14:25:33.463427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.032 [2024-07-15 14:25:33.463435] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.032 [2024-07-15 14:25:33.463443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
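Note: for this test the target is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A minimal launch-and-wait sketch in that spirit (illustrative, not the harness's waitforlisten; rpc_get_methods is just a cheap RPC used to probe readiness):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the RPC socket until the target answers, bailing out if it died early
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done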
00:08:54.032 [2024-07-15 14:25:33.463483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 [2024-07-15 14:25:34.263224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 [2024-07-15 14:25:34.279299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 NULL1 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.965 14:25:34 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.965 14:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:54.965 [2024-07-15 14:25:34.330364] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:54.965 [2024-07-15 14:25:34.330403] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71475 ] 00:08:55.224 Attached to nqn.2016-06.io.spdk:cnode1 00:08:55.224 Namespace ID: 1 size: 1GB 00:08:55.224 fused_ordering(0) 00:08:55.224 fused_ordering(1) 00:08:55.224 fused_ordering(2) 00:08:55.224 fused_ordering(3) 00:08:55.224 fused_ordering(4) 00:08:55.224 fused_ordering(5) 00:08:55.224 fused_ordering(6) 00:08:55.224 fused_ordering(7) 00:08:55.224 fused_ordering(8) 00:08:55.224 fused_ordering(9) 00:08:55.224 fused_ordering(10) 00:08:55.224 fused_ordering(11) 00:08:55.224 fused_ordering(12) 00:08:55.224 fused_ordering(13) 00:08:55.224 fused_ordering(14) 00:08:55.224 fused_ordering(15) 00:08:55.224 fused_ordering(16) 00:08:55.224 fused_ordering(17) 00:08:55.224 fused_ordering(18) 00:08:55.224 fused_ordering(19) 00:08:55.224 fused_ordering(20) 00:08:55.224 fused_ordering(21) 00:08:55.224 fused_ordering(22) 00:08:55.224 fused_ordering(23) 00:08:55.224 fused_ordering(24) 00:08:55.224 fused_ordering(25) 00:08:55.224 fused_ordering(26) 00:08:55.224 fused_ordering(27) 00:08:55.224 fused_ordering(28) 00:08:55.224 fused_ordering(29) 00:08:55.224 fused_ordering(30) 00:08:55.224 fused_ordering(31) 00:08:55.224 fused_ordering(32) 00:08:55.224 fused_ordering(33) 00:08:55.224 fused_ordering(34) 00:08:55.224 fused_ordering(35) 00:08:55.224 fused_ordering(36) 00:08:55.224 fused_ordering(37) 00:08:55.224 fused_ordering(38) 00:08:55.224 fused_ordering(39) 00:08:55.224 fused_ordering(40) 00:08:55.224 fused_ordering(41) 00:08:55.224 fused_ordering(42) 00:08:55.224 fused_ordering(43) 00:08:55.224 fused_ordering(44) 00:08:55.224 fused_ordering(45) 00:08:55.224 fused_ordering(46) 00:08:55.224 fused_ordering(47) 00:08:55.224 fused_ordering(48) 00:08:55.224 fused_ordering(49) 00:08:55.224 fused_ordering(50) 00:08:55.224 fused_ordering(51) 00:08:55.224 fused_ordering(52) 00:08:55.224 fused_ordering(53) 00:08:55.224 fused_ordering(54) 00:08:55.224 fused_ordering(55) 00:08:55.224 fused_ordering(56) 00:08:55.224 fused_ordering(57) 00:08:55.224 fused_ordering(58) 00:08:55.224 fused_ordering(59) 00:08:55.224 fused_ordering(60) 00:08:55.224 fused_ordering(61) 00:08:55.224 fused_ordering(62) 00:08:55.224 fused_ordering(63) 00:08:55.224 fused_ordering(64) 00:08:55.224 fused_ordering(65) 00:08:55.224 fused_ordering(66) 00:08:55.224 fused_ordering(67) 00:08:55.224 fused_ordering(68) 00:08:55.224 fused_ordering(69) 00:08:55.224 fused_ordering(70) 00:08:55.224 fused_ordering(71) 00:08:55.224 fused_ordering(72) 00:08:55.224 fused_ordering(73) 00:08:55.224 fused_ordering(74) 00:08:55.224 fused_ordering(75) 00:08:55.224 fused_ordering(76) 00:08:55.224 fused_ordering(77) 00:08:55.224 fused_ordering(78) 00:08:55.224 fused_ordering(79) 00:08:55.224 fused_ordering(80) 00:08:55.224 
fused_ordering(81) 00:08:55.224 fused_ordering(82) 00:08:55.224 fused_ordering(83) 00:08:55.224 fused_ordering(84) 00:08:55.224 fused_ordering(85) 00:08:55.224 fused_ordering(86) 00:08:55.224 fused_ordering(87) 00:08:55.224 fused_ordering(88) 00:08:55.224 fused_ordering(89) 00:08:55.224 fused_ordering(90) 00:08:55.224 fused_ordering(91) 00:08:55.224 fused_ordering(92) 00:08:55.224 fused_ordering(93) 00:08:55.224 fused_ordering(94) 00:08:55.224 fused_ordering(95) 00:08:55.224 fused_ordering(96) 00:08:55.224 fused_ordering(97) 00:08:55.224 fused_ordering(98) 00:08:55.224 fused_ordering(99) 00:08:55.224 fused_ordering(100) 00:08:55.224 fused_ordering(101) 00:08:55.224 fused_ordering(102) 00:08:55.224 fused_ordering(103) 00:08:55.224 fused_ordering(104) 00:08:55.224 fused_ordering(105) 00:08:55.224 fused_ordering(106) 00:08:55.224 fused_ordering(107) 00:08:55.224 fused_ordering(108) 00:08:55.224 fused_ordering(109) 00:08:55.224 fused_ordering(110) 00:08:55.224 fused_ordering(111) 00:08:55.224 fused_ordering(112) 00:08:55.224 fused_ordering(113) 00:08:55.224 fused_ordering(114) 00:08:55.224 fused_ordering(115) 00:08:55.224 fused_ordering(116) 00:08:55.224 fused_ordering(117) 00:08:55.224 fused_ordering(118) 00:08:55.224 fused_ordering(119) 00:08:55.224 fused_ordering(120) 00:08:55.224 fused_ordering(121) 00:08:55.224 fused_ordering(122) 00:08:55.224 fused_ordering(123) 00:08:55.224 fused_ordering(124) 00:08:55.224 fused_ordering(125) 00:08:55.224 fused_ordering(126) 00:08:55.224 fused_ordering(127) 00:08:55.224 fused_ordering(128) 00:08:55.224 fused_ordering(129) 00:08:55.224 fused_ordering(130) 00:08:55.224 fused_ordering(131) 00:08:55.224 fused_ordering(132) 00:08:55.224 fused_ordering(133) 00:08:55.224 fused_ordering(134) 00:08:55.224 fused_ordering(135) 00:08:55.224 fused_ordering(136) 00:08:55.224 fused_ordering(137) 00:08:55.224 fused_ordering(138) 00:08:55.224 fused_ordering(139) 00:08:55.224 fused_ordering(140) 00:08:55.224 fused_ordering(141) 00:08:55.224 fused_ordering(142) 00:08:55.224 fused_ordering(143) 00:08:55.224 fused_ordering(144) 00:08:55.224 fused_ordering(145) 00:08:55.224 fused_ordering(146) 00:08:55.224 fused_ordering(147) 00:08:55.224 fused_ordering(148) 00:08:55.224 fused_ordering(149) 00:08:55.224 fused_ordering(150) 00:08:55.224 fused_ordering(151) 00:08:55.224 fused_ordering(152) 00:08:55.224 fused_ordering(153) 00:08:55.224 fused_ordering(154) 00:08:55.224 fused_ordering(155) 00:08:55.224 fused_ordering(156) 00:08:55.224 fused_ordering(157) 00:08:55.224 fused_ordering(158) 00:08:55.224 fused_ordering(159) 00:08:55.224 fused_ordering(160) 00:08:55.224 fused_ordering(161) 00:08:55.224 fused_ordering(162) 00:08:55.224 fused_ordering(163) 00:08:55.224 fused_ordering(164) 00:08:55.224 fused_ordering(165) 00:08:55.224 fused_ordering(166) 00:08:55.224 fused_ordering(167) 00:08:55.224 fused_ordering(168) 00:08:55.224 fused_ordering(169) 00:08:55.224 fused_ordering(170) 00:08:55.224 fused_ordering(171) 00:08:55.224 fused_ordering(172) 00:08:55.224 fused_ordering(173) 00:08:55.224 fused_ordering(174) 00:08:55.224 fused_ordering(175) 00:08:55.224 fused_ordering(176) 00:08:55.224 fused_ordering(177) 00:08:55.224 fused_ordering(178) 00:08:55.224 fused_ordering(179) 00:08:55.224 fused_ordering(180) 00:08:55.224 fused_ordering(181) 00:08:55.224 fused_ordering(182) 00:08:55.224 fused_ordering(183) 00:08:55.224 fused_ordering(184) 00:08:55.224 fused_ordering(185) 00:08:55.224 fused_ordering(186) 00:08:55.224 fused_ordering(187) 00:08:55.224 fused_ordering(188) 00:08:55.224 
fused_ordering(189) 00:08:55.224 fused_ordering(190) 00:08:55.224 fused_ordering(191) 00:08:55.224 fused_ordering(192) 00:08:55.224 fused_ordering(193) 00:08:55.225 fused_ordering(194) 00:08:55.225 fused_ordering(195) 00:08:55.225 fused_ordering(196) 00:08:55.225 fused_ordering(197) 00:08:55.225 fused_ordering(198) 00:08:55.225 fused_ordering(199) 00:08:55.225 fused_ordering(200) 00:08:55.225 fused_ordering(201) 00:08:55.225 fused_ordering(202) 00:08:55.225 fused_ordering(203) 00:08:55.225 fused_ordering(204) 00:08:55.225 fused_ordering(205) 00:08:55.792 fused_ordering(206) 00:08:55.792 fused_ordering(207) 00:08:55.792 fused_ordering(208) 00:08:55.792 fused_ordering(209) 00:08:55.792 fused_ordering(210) 00:08:55.792 fused_ordering(211) 00:08:55.792 fused_ordering(212) 00:08:55.792 fused_ordering(213) 00:08:55.792 fused_ordering(214) 00:08:55.792 fused_ordering(215) 00:08:55.792 fused_ordering(216) 00:08:55.792 fused_ordering(217) 00:08:55.792 fused_ordering(218) 00:08:55.792 fused_ordering(219) 00:08:55.792 fused_ordering(220) 00:08:55.792 fused_ordering(221) 00:08:55.792 fused_ordering(222) 00:08:55.792 fused_ordering(223) 00:08:55.792 fused_ordering(224) 00:08:55.792 fused_ordering(225) 00:08:55.792 fused_ordering(226) 00:08:55.792 fused_ordering(227) 00:08:55.792 fused_ordering(228) 00:08:55.792 fused_ordering(229) 00:08:55.792 fused_ordering(230) 00:08:55.792 fused_ordering(231) 00:08:55.792 fused_ordering(232) 00:08:55.792 fused_ordering(233) 00:08:55.792 fused_ordering(234) 00:08:55.792 fused_ordering(235) 00:08:55.792 fused_ordering(236) 00:08:55.792 fused_ordering(237) 00:08:55.792 fused_ordering(238) 00:08:55.792 fused_ordering(239) 00:08:55.792 fused_ordering(240) 00:08:55.792 fused_ordering(241) 00:08:55.792 fused_ordering(242) 00:08:55.792 fused_ordering(243) 00:08:55.792 fused_ordering(244) 00:08:55.792 fused_ordering(245) 00:08:55.792 fused_ordering(246) 00:08:55.792 fused_ordering(247) 00:08:55.792 fused_ordering(248) 00:08:55.792 fused_ordering(249) 00:08:55.792 fused_ordering(250) 00:08:55.792 fused_ordering(251) 00:08:55.792 fused_ordering(252) 00:08:55.792 fused_ordering(253) 00:08:55.792 fused_ordering(254) 00:08:55.792 fused_ordering(255) 00:08:55.792 fused_ordering(256) 00:08:55.792 fused_ordering(257) 00:08:55.792 fused_ordering(258) 00:08:55.792 fused_ordering(259) 00:08:55.792 fused_ordering(260) 00:08:55.792 fused_ordering(261) 00:08:55.792 fused_ordering(262) 00:08:55.792 fused_ordering(263) 00:08:55.792 fused_ordering(264) 00:08:55.792 fused_ordering(265) 00:08:55.792 fused_ordering(266) 00:08:55.792 fused_ordering(267) 00:08:55.792 fused_ordering(268) 00:08:55.792 fused_ordering(269) 00:08:55.792 fused_ordering(270) 00:08:55.792 fused_ordering(271) 00:08:55.792 fused_ordering(272) 00:08:55.792 fused_ordering(273) 00:08:55.792 fused_ordering(274) 00:08:55.792 fused_ordering(275) 00:08:55.792 fused_ordering(276) 00:08:55.792 fused_ordering(277) 00:08:55.792 fused_ordering(278) 00:08:55.792 fused_ordering(279) 00:08:55.792 fused_ordering(280) 00:08:55.792 fused_ordering(281) 00:08:55.792 fused_ordering(282) 00:08:55.792 fused_ordering(283) 00:08:55.792 fused_ordering(284) 00:08:55.792 fused_ordering(285) 00:08:55.792 fused_ordering(286) 00:08:55.792 fused_ordering(287) 00:08:55.792 fused_ordering(288) 00:08:55.792 fused_ordering(289) 00:08:55.792 fused_ordering(290) 00:08:55.792 fused_ordering(291) 00:08:55.792 fused_ordering(292) 00:08:55.792 fused_ordering(293) 00:08:55.792 fused_ordering(294) 00:08:55.792 fused_ordering(295) 00:08:55.792 fused_ordering(296) 
00:08:55.792 fused_ordering(297) 00:08:55.792 fused_ordering(298) 00:08:55.792 fused_ordering(299) 00:08:55.792 fused_ordering(300) 00:08:55.792 fused_ordering(301) 00:08:55.792 fused_ordering(302) 00:08:55.792 fused_ordering(303) 00:08:55.792 fused_ordering(304) 00:08:55.792 fused_ordering(305) 00:08:55.792 fused_ordering(306) 00:08:55.792 fused_ordering(307) 00:08:55.792 fused_ordering(308) 00:08:55.792 fused_ordering(309) 00:08:55.792 fused_ordering(310) 00:08:55.792 fused_ordering(311) 00:08:55.792 fused_ordering(312) 00:08:55.792 fused_ordering(313) 00:08:55.792 fused_ordering(314) 00:08:55.792 fused_ordering(315) 00:08:55.792 fused_ordering(316) 00:08:55.792 fused_ordering(317) 00:08:55.792 fused_ordering(318) 00:08:55.792 fused_ordering(319) 00:08:55.792 fused_ordering(320) 00:08:55.792 fused_ordering(321) 00:08:55.792 fused_ordering(322) 00:08:55.792 fused_ordering(323) 00:08:55.792 fused_ordering(324) 00:08:55.792 fused_ordering(325) 00:08:55.792 fused_ordering(326) 00:08:55.792 fused_ordering(327) 00:08:55.792 fused_ordering(328) 00:08:55.792 fused_ordering(329) 00:08:55.792 fused_ordering(330) 00:08:55.792 fused_ordering(331) 00:08:55.792 fused_ordering(332) 00:08:55.792 fused_ordering(333) 00:08:55.792 fused_ordering(334) 00:08:55.792 fused_ordering(335) 00:08:55.792 fused_ordering(336) 00:08:55.792 fused_ordering(337) 00:08:55.792 fused_ordering(338) 00:08:55.792 fused_ordering(339) 00:08:55.792 fused_ordering(340) 00:08:55.792 fused_ordering(341) 00:08:55.792 fused_ordering(342) 00:08:55.792 fused_ordering(343) 00:08:55.792 fused_ordering(344) 00:08:55.792 fused_ordering(345) 00:08:55.793 fused_ordering(346) 00:08:55.793 fused_ordering(347) 00:08:55.793 fused_ordering(348) 00:08:55.793 fused_ordering(349) 00:08:55.793 fused_ordering(350) 00:08:55.793 fused_ordering(351) 00:08:55.793 fused_ordering(352) 00:08:55.793 fused_ordering(353) 00:08:55.793 fused_ordering(354) 00:08:55.793 fused_ordering(355) 00:08:55.793 fused_ordering(356) 00:08:55.793 fused_ordering(357) 00:08:55.793 fused_ordering(358) 00:08:55.793 fused_ordering(359) 00:08:55.793 fused_ordering(360) 00:08:55.793 fused_ordering(361) 00:08:55.793 fused_ordering(362) 00:08:55.793 fused_ordering(363) 00:08:55.793 fused_ordering(364) 00:08:55.793 fused_ordering(365) 00:08:55.793 fused_ordering(366) 00:08:55.793 fused_ordering(367) 00:08:55.793 fused_ordering(368) 00:08:55.793 fused_ordering(369) 00:08:55.793 fused_ordering(370) 00:08:55.793 fused_ordering(371) 00:08:55.793 fused_ordering(372) 00:08:55.793 fused_ordering(373) 00:08:55.793 fused_ordering(374) 00:08:55.793 fused_ordering(375) 00:08:55.793 fused_ordering(376) 00:08:55.793 fused_ordering(377) 00:08:55.793 fused_ordering(378) 00:08:55.793 fused_ordering(379) 00:08:55.793 fused_ordering(380) 00:08:55.793 fused_ordering(381) 00:08:55.793 fused_ordering(382) 00:08:55.793 fused_ordering(383) 00:08:55.793 fused_ordering(384) 00:08:55.793 fused_ordering(385) 00:08:55.793 fused_ordering(386) 00:08:55.793 fused_ordering(387) 00:08:55.793 fused_ordering(388) 00:08:55.793 fused_ordering(389) 00:08:55.793 fused_ordering(390) 00:08:55.793 fused_ordering(391) 00:08:55.793 fused_ordering(392) 00:08:55.793 fused_ordering(393) 00:08:55.793 fused_ordering(394) 00:08:55.793 fused_ordering(395) 00:08:55.793 fused_ordering(396) 00:08:55.793 fused_ordering(397) 00:08:55.793 fused_ordering(398) 00:08:55.793 fused_ordering(399) 00:08:55.793 fused_ordering(400) 00:08:55.793 fused_ordering(401) 00:08:55.793 fused_ordering(402) 00:08:55.793 fused_ordering(403) 00:08:55.793 
fused_ordering(404) 00:08:55.793 fused_ordering(405) 00:08:55.793 fused_ordering(406) 00:08:55.793 fused_ordering(407) 00:08:55.793 fused_ordering(408) 00:08:55.793 fused_ordering(409) 00:08:55.793 fused_ordering(410) 00:08:56.051 fused_ordering(411) 00:08:56.051 fused_ordering(412) 00:08:56.051 fused_ordering(413) 00:08:56.051 fused_ordering(414) 00:08:56.051 fused_ordering(415) 00:08:56.051 fused_ordering(416) 00:08:56.051 fused_ordering(417) 00:08:56.051 fused_ordering(418) 00:08:56.051 fused_ordering(419) 00:08:56.051 fused_ordering(420) 00:08:56.051 fused_ordering(421) 00:08:56.051 fused_ordering(422) 00:08:56.051 fused_ordering(423) 00:08:56.051 fused_ordering(424) 00:08:56.051 fused_ordering(425) 00:08:56.051 fused_ordering(426) 00:08:56.051 fused_ordering(427) 00:08:56.051 fused_ordering(428) 00:08:56.051 fused_ordering(429) 00:08:56.051 fused_ordering(430) 00:08:56.051 fused_ordering(431) 00:08:56.051 fused_ordering(432) 00:08:56.051 fused_ordering(433) 00:08:56.051 fused_ordering(434) 00:08:56.051 fused_ordering(435) 00:08:56.051 fused_ordering(436) 00:08:56.051 fused_ordering(437) 00:08:56.051 fused_ordering(438) 00:08:56.051 fused_ordering(439) 00:08:56.051 fused_ordering(440) 00:08:56.051 fused_ordering(441) 00:08:56.051 fused_ordering(442) 00:08:56.051 fused_ordering(443) 00:08:56.051 fused_ordering(444) 00:08:56.051 fused_ordering(445) 00:08:56.051 fused_ordering(446) 00:08:56.051 fused_ordering(447) 00:08:56.051 fused_ordering(448) 00:08:56.051 fused_ordering(449) 00:08:56.051 fused_ordering(450) 00:08:56.051 fused_ordering(451) 00:08:56.051 fused_ordering(452) 00:08:56.051 fused_ordering(453) 00:08:56.051 fused_ordering(454) 00:08:56.051 fused_ordering(455) 00:08:56.051 fused_ordering(456) 00:08:56.051 fused_ordering(457) 00:08:56.051 fused_ordering(458) 00:08:56.051 fused_ordering(459) 00:08:56.051 fused_ordering(460) 00:08:56.051 fused_ordering(461) 00:08:56.051 fused_ordering(462) 00:08:56.051 fused_ordering(463) 00:08:56.051 fused_ordering(464) 00:08:56.051 fused_ordering(465) 00:08:56.051 fused_ordering(466) 00:08:56.051 fused_ordering(467) 00:08:56.051 fused_ordering(468) 00:08:56.051 fused_ordering(469) 00:08:56.051 fused_ordering(470) 00:08:56.051 fused_ordering(471) 00:08:56.051 fused_ordering(472) 00:08:56.051 fused_ordering(473) 00:08:56.051 fused_ordering(474) 00:08:56.051 fused_ordering(475) 00:08:56.051 fused_ordering(476) 00:08:56.051 fused_ordering(477) 00:08:56.051 fused_ordering(478) 00:08:56.051 fused_ordering(479) 00:08:56.051 fused_ordering(480) 00:08:56.051 fused_ordering(481) 00:08:56.051 fused_ordering(482) 00:08:56.051 fused_ordering(483) 00:08:56.051 fused_ordering(484) 00:08:56.051 fused_ordering(485) 00:08:56.051 fused_ordering(486) 00:08:56.051 fused_ordering(487) 00:08:56.051 fused_ordering(488) 00:08:56.051 fused_ordering(489) 00:08:56.051 fused_ordering(490) 00:08:56.051 fused_ordering(491) 00:08:56.051 fused_ordering(492) 00:08:56.051 fused_ordering(493) 00:08:56.051 fused_ordering(494) 00:08:56.051 fused_ordering(495) 00:08:56.051 fused_ordering(496) 00:08:56.051 fused_ordering(497) 00:08:56.051 fused_ordering(498) 00:08:56.051 fused_ordering(499) 00:08:56.051 fused_ordering(500) 00:08:56.051 fused_ordering(501) 00:08:56.051 fused_ordering(502) 00:08:56.051 fused_ordering(503) 00:08:56.051 fused_ordering(504) 00:08:56.051 fused_ordering(505) 00:08:56.051 fused_ordering(506) 00:08:56.051 fused_ordering(507) 00:08:56.051 fused_ordering(508) 00:08:56.051 fused_ordering(509) 00:08:56.051 fused_ordering(510) 00:08:56.051 fused_ordering(511) 
00:08:56.051 fused_ordering(512) 00:08:56.051 fused_ordering(513) 00:08:56.051 fused_ordering(514) 00:08:56.051 fused_ordering(515) 00:08:56.051 fused_ordering(516) 00:08:56.051 fused_ordering(517) 00:08:56.051 fused_ordering(518) 00:08:56.051 fused_ordering(519) 00:08:56.051 fused_ordering(520) 00:08:56.051 fused_ordering(521) 00:08:56.051 fused_ordering(522) 00:08:56.051 fused_ordering(523) 00:08:56.051 fused_ordering(524) 00:08:56.051 fused_ordering(525) 00:08:56.051 fused_ordering(526) 00:08:56.051 fused_ordering(527) 00:08:56.051 fused_ordering(528) 00:08:56.051 fused_ordering(529) 00:08:56.051 fused_ordering(530) 00:08:56.051 fused_ordering(531) 00:08:56.051 fused_ordering(532) 00:08:56.051 fused_ordering(533) 00:08:56.051 fused_ordering(534) 00:08:56.051 fused_ordering(535) 00:08:56.051 fused_ordering(536) 00:08:56.051 fused_ordering(537) 00:08:56.051 fused_ordering(538) 00:08:56.051 fused_ordering(539) 00:08:56.051 fused_ordering(540) 00:08:56.051 fused_ordering(541) 00:08:56.051 fused_ordering(542) 00:08:56.051 fused_ordering(543) 00:08:56.051 fused_ordering(544) 00:08:56.051 fused_ordering(545) 00:08:56.051 fused_ordering(546) 00:08:56.051 fused_ordering(547) 00:08:56.051 fused_ordering(548) 00:08:56.051 fused_ordering(549) 00:08:56.051 fused_ordering(550) 00:08:56.051 fused_ordering(551) 00:08:56.051 fused_ordering(552) 00:08:56.051 fused_ordering(553) 00:08:56.051 fused_ordering(554) 00:08:56.051 fused_ordering(555) 00:08:56.051 fused_ordering(556) 00:08:56.051 fused_ordering(557) 00:08:56.051 fused_ordering(558) 00:08:56.051 fused_ordering(559) 00:08:56.051 fused_ordering(560) 00:08:56.051 fused_ordering(561) 00:08:56.051 fused_ordering(562) 00:08:56.051 fused_ordering(563) 00:08:56.051 fused_ordering(564) 00:08:56.051 fused_ordering(565) 00:08:56.051 fused_ordering(566) 00:08:56.051 fused_ordering(567) 00:08:56.051 fused_ordering(568) 00:08:56.051 fused_ordering(569) 00:08:56.051 fused_ordering(570) 00:08:56.051 fused_ordering(571) 00:08:56.051 fused_ordering(572) 00:08:56.051 fused_ordering(573) 00:08:56.051 fused_ordering(574) 00:08:56.051 fused_ordering(575) 00:08:56.051 fused_ordering(576) 00:08:56.051 fused_ordering(577) 00:08:56.051 fused_ordering(578) 00:08:56.051 fused_ordering(579) 00:08:56.051 fused_ordering(580) 00:08:56.051 fused_ordering(581) 00:08:56.051 fused_ordering(582) 00:08:56.051 fused_ordering(583) 00:08:56.051 fused_ordering(584) 00:08:56.051 fused_ordering(585) 00:08:56.051 fused_ordering(586) 00:08:56.051 fused_ordering(587) 00:08:56.051 fused_ordering(588) 00:08:56.051 fused_ordering(589) 00:08:56.051 fused_ordering(590) 00:08:56.051 fused_ordering(591) 00:08:56.051 fused_ordering(592) 00:08:56.052 fused_ordering(593) 00:08:56.052 fused_ordering(594) 00:08:56.052 fused_ordering(595) 00:08:56.052 fused_ordering(596) 00:08:56.052 fused_ordering(597) 00:08:56.052 fused_ordering(598) 00:08:56.052 fused_ordering(599) 00:08:56.052 fused_ordering(600) 00:08:56.052 fused_ordering(601) 00:08:56.052 fused_ordering(602) 00:08:56.052 fused_ordering(603) 00:08:56.052 fused_ordering(604) 00:08:56.052 fused_ordering(605) 00:08:56.052 fused_ordering(606) 00:08:56.052 fused_ordering(607) 00:08:56.052 fused_ordering(608) 00:08:56.052 fused_ordering(609) 00:08:56.052 fused_ordering(610) 00:08:56.052 fused_ordering(611) 00:08:56.052 fused_ordering(612) 00:08:56.052 fused_ordering(613) 00:08:56.052 fused_ordering(614) 00:08:56.052 fused_ordering(615) 00:08:56.618 fused_ordering(616) 00:08:56.618 fused_ordering(617) 00:08:56.618 fused_ordering(618) 00:08:56.618 
fused_ordering(619) 00:08:56.618 fused_ordering(620) 00:08:56.618 fused_ordering(621) 00:08:56.618 fused_ordering(622) 00:08:56.618 fused_ordering(623) 00:08:56.618 fused_ordering(624) 00:08:56.618 fused_ordering(625) 00:08:56.618 fused_ordering(626) 00:08:56.618 fused_ordering(627) 00:08:56.618 fused_ordering(628) 00:08:56.618 fused_ordering(629) 00:08:56.618 fused_ordering(630) 00:08:56.618 fused_ordering(631) 00:08:56.618 fused_ordering(632) 00:08:56.618 fused_ordering(633) 00:08:56.618 fused_ordering(634) 00:08:56.618 fused_ordering(635) 00:08:56.618 fused_ordering(636) 00:08:56.618 fused_ordering(637) 00:08:56.618 fused_ordering(638) 00:08:56.618 fused_ordering(639) 00:08:56.618 fused_ordering(640) 00:08:56.618 fused_ordering(641) 00:08:56.618 fused_ordering(642) 00:08:56.618 fused_ordering(643) 00:08:56.618 fused_ordering(644) 00:08:56.618 fused_ordering(645) 00:08:56.618 fused_ordering(646) 00:08:56.618 fused_ordering(647) 00:08:56.618 fused_ordering(648) 00:08:56.618 fused_ordering(649) 00:08:56.618 fused_ordering(650) 00:08:56.618 fused_ordering(651) 00:08:56.618 fused_ordering(652) 00:08:56.618 fused_ordering(653) 00:08:56.618 fused_ordering(654) 00:08:56.618 fused_ordering(655) 00:08:56.618 fused_ordering(656) 00:08:56.618 fused_ordering(657) 00:08:56.618 fused_ordering(658) 00:08:56.618 fused_ordering(659) 00:08:56.618 fused_ordering(660) 00:08:56.618 fused_ordering(661) 00:08:56.618 fused_ordering(662) 00:08:56.618 fused_ordering(663) 00:08:56.618 fused_ordering(664) 00:08:56.618 fused_ordering(665) 00:08:56.618 fused_ordering(666) 00:08:56.618 fused_ordering(667) 00:08:56.618 fused_ordering(668) 00:08:56.618 fused_ordering(669) 00:08:56.618 fused_ordering(670) 00:08:56.618 fused_ordering(671) 00:08:56.618 fused_ordering(672) 00:08:56.618 fused_ordering(673) 00:08:56.618 fused_ordering(674) 00:08:56.618 fused_ordering(675) 00:08:56.618 fused_ordering(676) 00:08:56.618 fused_ordering(677) 00:08:56.618 fused_ordering(678) 00:08:56.618 fused_ordering(679) 00:08:56.618 fused_ordering(680) 00:08:56.618 fused_ordering(681) 00:08:56.618 fused_ordering(682) 00:08:56.618 fused_ordering(683) 00:08:56.618 fused_ordering(684) 00:08:56.618 fused_ordering(685) 00:08:56.618 fused_ordering(686) 00:08:56.618 fused_ordering(687) 00:08:56.618 fused_ordering(688) 00:08:56.618 fused_ordering(689) 00:08:56.618 fused_ordering(690) 00:08:56.618 fused_ordering(691) 00:08:56.618 fused_ordering(692) 00:08:56.618 fused_ordering(693) 00:08:56.618 fused_ordering(694) 00:08:56.618 fused_ordering(695) 00:08:56.618 fused_ordering(696) 00:08:56.618 fused_ordering(697) 00:08:56.618 fused_ordering(698) 00:08:56.618 fused_ordering(699) 00:08:56.618 fused_ordering(700) 00:08:56.618 fused_ordering(701) 00:08:56.618 fused_ordering(702) 00:08:56.618 fused_ordering(703) 00:08:56.618 fused_ordering(704) 00:08:56.618 fused_ordering(705) 00:08:56.618 fused_ordering(706) 00:08:56.618 fused_ordering(707) 00:08:56.618 fused_ordering(708) 00:08:56.618 fused_ordering(709) 00:08:56.618 fused_ordering(710) 00:08:56.618 fused_ordering(711) 00:08:56.618 fused_ordering(712) 00:08:56.618 fused_ordering(713) 00:08:56.618 fused_ordering(714) 00:08:56.618 fused_ordering(715) 00:08:56.618 fused_ordering(716) 00:08:56.618 fused_ordering(717) 00:08:56.618 fused_ordering(718) 00:08:56.618 fused_ordering(719) 00:08:56.618 fused_ordering(720) 00:08:56.618 fused_ordering(721) 00:08:56.618 fused_ordering(722) 00:08:56.618 fused_ordering(723) 00:08:56.618 fused_ordering(724) 00:08:56.618 fused_ordering(725) 00:08:56.618 fused_ordering(726) 
00:08:56.618 fused_ordering(727) 00:08:56.618 fused_ordering(728) 00:08:56.618 fused_ordering(729) 00:08:56.618 fused_ordering(730) 00:08:56.618 fused_ordering(731) 00:08:56.618 fused_ordering(732) 00:08:56.618 fused_ordering(733) 00:08:56.618 fused_ordering(734) 00:08:56.618 fused_ordering(735) 00:08:56.618 fused_ordering(736) 00:08:56.618 fused_ordering(737) 00:08:56.618 fused_ordering(738) 00:08:56.618 fused_ordering(739) 00:08:56.618 fused_ordering(740) 00:08:56.618 fused_ordering(741) 00:08:56.618 fused_ordering(742) 00:08:56.618 fused_ordering(743) 00:08:56.618 fused_ordering(744) 00:08:56.618 fused_ordering(745) 00:08:56.618 fused_ordering(746) 00:08:56.618 fused_ordering(747) 00:08:56.618 fused_ordering(748) 00:08:56.618 fused_ordering(749) 00:08:56.618 fused_ordering(750) 00:08:56.618 fused_ordering(751) 00:08:56.618 fused_ordering(752) 00:08:56.618 fused_ordering(753) 00:08:56.618 fused_ordering(754) 00:08:56.618 fused_ordering(755) 00:08:56.618 fused_ordering(756) 00:08:56.618 fused_ordering(757) 00:08:56.618 fused_ordering(758) 00:08:56.618 fused_ordering(759) 00:08:56.618 fused_ordering(760) 00:08:56.618 fused_ordering(761) 00:08:56.618 fused_ordering(762) 00:08:56.618 fused_ordering(763) 00:08:56.618 fused_ordering(764) 00:08:56.618 fused_ordering(765) 00:08:56.618 fused_ordering(766) 00:08:56.618 fused_ordering(767) 00:08:56.618 fused_ordering(768) 00:08:56.618 fused_ordering(769) 00:08:56.618 fused_ordering(770) 00:08:56.618 fused_ordering(771) 00:08:56.618 fused_ordering(772) 00:08:56.618 fused_ordering(773) 00:08:56.618 fused_ordering(774) 00:08:56.618 fused_ordering(775) 00:08:56.618 fused_ordering(776) 00:08:56.618 fused_ordering(777) 00:08:56.618 fused_ordering(778) 00:08:56.618 fused_ordering(779) 00:08:56.618 fused_ordering(780) 00:08:56.618 fused_ordering(781) 00:08:56.618 fused_ordering(782) 00:08:56.618 fused_ordering(783) 00:08:56.618 fused_ordering(784) 00:08:56.618 fused_ordering(785) 00:08:56.618 fused_ordering(786) 00:08:56.618 fused_ordering(787) 00:08:56.618 fused_ordering(788) 00:08:56.618 fused_ordering(789) 00:08:56.618 fused_ordering(790) 00:08:56.618 fused_ordering(791) 00:08:56.618 fused_ordering(792) 00:08:56.618 fused_ordering(793) 00:08:56.618 fused_ordering(794) 00:08:56.618 fused_ordering(795) 00:08:56.618 fused_ordering(796) 00:08:56.618 fused_ordering(797) 00:08:56.618 fused_ordering(798) 00:08:56.618 fused_ordering(799) 00:08:56.618 fused_ordering(800) 00:08:56.618 fused_ordering(801) 00:08:56.618 fused_ordering(802) 00:08:56.618 fused_ordering(803) 00:08:56.618 fused_ordering(804) 00:08:56.618 fused_ordering(805) 00:08:56.618 fused_ordering(806) 00:08:56.618 fused_ordering(807) 00:08:56.618 fused_ordering(808) 00:08:56.618 fused_ordering(809) 00:08:56.618 fused_ordering(810) 00:08:56.618 fused_ordering(811) 00:08:56.618 fused_ordering(812) 00:08:56.618 fused_ordering(813) 00:08:56.618 fused_ordering(814) 00:08:56.618 fused_ordering(815) 00:08:56.618 fused_ordering(816) 00:08:56.618 fused_ordering(817) 00:08:56.618 fused_ordering(818) 00:08:56.618 fused_ordering(819) 00:08:56.618 fused_ordering(820) 00:08:57.185 fused_ordering(821) 00:08:57.185 fused_ordering(822) 00:08:57.185 fused_ordering(823) 00:08:57.185 fused_ordering(824) 00:08:57.185 fused_ordering(825) 00:08:57.185 fused_ordering(826) 00:08:57.185 fused_ordering(827) 00:08:57.185 fused_ordering(828) 00:08:57.185 fused_ordering(829) 00:08:57.185 fused_ordering(830) 00:08:57.185 fused_ordering(831) 00:08:57.185 fused_ordering(832) 00:08:57.185 fused_ordering(833) 00:08:57.185 
fused_ordering(834) 00:08:57.185 fused_ordering(835) 00:08:57.185 fused_ordering(836) 00:08:57.185 fused_ordering(837) 00:08:57.185 fused_ordering(838) 00:08:57.185 fused_ordering(839) 00:08:57.185 fused_ordering(840) 00:08:57.185 fused_ordering(841) 00:08:57.185 fused_ordering(842) 00:08:57.185 fused_ordering(843) 00:08:57.185 fused_ordering(844) 00:08:57.185 fused_ordering(845) 00:08:57.185 fused_ordering(846) 00:08:57.185 fused_ordering(847) 00:08:57.185 fused_ordering(848) 00:08:57.185 fused_ordering(849) 00:08:57.185 fused_ordering(850) 00:08:57.185 fused_ordering(851) 00:08:57.185 fused_ordering(852) 00:08:57.185 fused_ordering(853) 00:08:57.185 fused_ordering(854) 00:08:57.185 fused_ordering(855) 00:08:57.185 fused_ordering(856) 00:08:57.185 fused_ordering(857) 00:08:57.185 fused_ordering(858) 00:08:57.185 fused_ordering(859) 00:08:57.185 fused_ordering(860) 00:08:57.185 fused_ordering(861) 00:08:57.185 fused_ordering(862) 00:08:57.185 fused_ordering(863) 00:08:57.185 fused_ordering(864) 00:08:57.185 fused_ordering(865) 00:08:57.185 fused_ordering(866) 00:08:57.185 fused_ordering(867) 00:08:57.185 fused_ordering(868) 00:08:57.185 fused_ordering(869) 00:08:57.185 fused_ordering(870) 00:08:57.185 fused_ordering(871) 00:08:57.185 fused_ordering(872) 00:08:57.185 fused_ordering(873) 00:08:57.185 fused_ordering(874) 00:08:57.185 fused_ordering(875) 00:08:57.185 fused_ordering(876) 00:08:57.185 fused_ordering(877) 00:08:57.185 fused_ordering(878) 00:08:57.185 fused_ordering(879) 00:08:57.185 fused_ordering(880) 00:08:57.185 fused_ordering(881) 00:08:57.185 fused_ordering(882) 00:08:57.185 fused_ordering(883) 00:08:57.185 fused_ordering(884) 00:08:57.185 fused_ordering(885) 00:08:57.185 fused_ordering(886) 00:08:57.185 fused_ordering(887) 00:08:57.185 fused_ordering(888) 00:08:57.185 fused_ordering(889) 00:08:57.185 fused_ordering(890) 00:08:57.185 fused_ordering(891) 00:08:57.185 fused_ordering(892) 00:08:57.185 fused_ordering(893) 00:08:57.185 fused_ordering(894) 00:08:57.185 fused_ordering(895) 00:08:57.185 fused_ordering(896) 00:08:57.185 fused_ordering(897) 00:08:57.185 fused_ordering(898) 00:08:57.185 fused_ordering(899) 00:08:57.185 fused_ordering(900) 00:08:57.185 fused_ordering(901) 00:08:57.185 fused_ordering(902) 00:08:57.185 fused_ordering(903) 00:08:57.185 fused_ordering(904) 00:08:57.185 fused_ordering(905) 00:08:57.185 fused_ordering(906) 00:08:57.185 fused_ordering(907) 00:08:57.185 fused_ordering(908) 00:08:57.185 fused_ordering(909) 00:08:57.185 fused_ordering(910) 00:08:57.185 fused_ordering(911) 00:08:57.185 fused_ordering(912) 00:08:57.185 fused_ordering(913) 00:08:57.185 fused_ordering(914) 00:08:57.185 fused_ordering(915) 00:08:57.185 fused_ordering(916) 00:08:57.185 fused_ordering(917) 00:08:57.185 fused_ordering(918) 00:08:57.185 fused_ordering(919) 00:08:57.185 fused_ordering(920) 00:08:57.185 fused_ordering(921) 00:08:57.185 fused_ordering(922) 00:08:57.185 fused_ordering(923) 00:08:57.185 fused_ordering(924) 00:08:57.185 fused_ordering(925) 00:08:57.185 fused_ordering(926) 00:08:57.185 fused_ordering(927) 00:08:57.185 fused_ordering(928) 00:08:57.185 fused_ordering(929) 00:08:57.185 fused_ordering(930) 00:08:57.185 fused_ordering(931) 00:08:57.185 fused_ordering(932) 00:08:57.185 fused_ordering(933) 00:08:57.185 fused_ordering(934) 00:08:57.185 fused_ordering(935) 00:08:57.185 fused_ordering(936) 00:08:57.185 fused_ordering(937) 00:08:57.185 fused_ordering(938) 00:08:57.185 fused_ordering(939) 00:08:57.185 fused_ordering(940) 00:08:57.185 fused_ordering(941) 
00:08:57.185 fused_ordering(942) 00:08:57.185 fused_ordering(943) 00:08:57.185 fused_ordering(944) 00:08:57.185 fused_ordering(945) 00:08:57.185 fused_ordering(946) 00:08:57.185 fused_ordering(947) 00:08:57.185 fused_ordering(948) 00:08:57.185 fused_ordering(949) 00:08:57.185 fused_ordering(950) 00:08:57.185 fused_ordering(951) 00:08:57.185 fused_ordering(952) 00:08:57.185 fused_ordering(953) 00:08:57.185 fused_ordering(954) 00:08:57.185 fused_ordering(955) 00:08:57.185 fused_ordering(956) 00:08:57.185 fused_ordering(957) 00:08:57.185 fused_ordering(958) 00:08:57.185 fused_ordering(959) 00:08:57.186 fused_ordering(960) 00:08:57.186 fused_ordering(961) 00:08:57.186 fused_ordering(962) 00:08:57.186 fused_ordering(963) 00:08:57.186 fused_ordering(964) 00:08:57.186 fused_ordering(965) 00:08:57.186 fused_ordering(966) 00:08:57.186 fused_ordering(967) 00:08:57.186 fused_ordering(968) 00:08:57.186 fused_ordering(969) 00:08:57.186 fused_ordering(970) 00:08:57.186 fused_ordering(971) 00:08:57.186 fused_ordering(972) 00:08:57.186 fused_ordering(973) 00:08:57.186 fused_ordering(974) 00:08:57.186 fused_ordering(975) 00:08:57.186 fused_ordering(976) 00:08:57.186 fused_ordering(977) 00:08:57.186 fused_ordering(978) 00:08:57.186 fused_ordering(979) 00:08:57.186 fused_ordering(980) 00:08:57.186 fused_ordering(981) 00:08:57.186 fused_ordering(982) 00:08:57.186 fused_ordering(983) 00:08:57.186 fused_ordering(984) 00:08:57.186 fused_ordering(985) 00:08:57.186 fused_ordering(986) 00:08:57.186 fused_ordering(987) 00:08:57.186 fused_ordering(988) 00:08:57.186 fused_ordering(989) 00:08:57.186 fused_ordering(990) 00:08:57.186 fused_ordering(991) 00:08:57.186 fused_ordering(992) 00:08:57.186 fused_ordering(993) 00:08:57.186 fused_ordering(994) 00:08:57.186 fused_ordering(995) 00:08:57.186 fused_ordering(996) 00:08:57.186 fused_ordering(997) 00:08:57.186 fused_ordering(998) 00:08:57.186 fused_ordering(999) 00:08:57.186 fused_ordering(1000) 00:08:57.186 fused_ordering(1001) 00:08:57.186 fused_ordering(1002) 00:08:57.186 fused_ordering(1003) 00:08:57.186 fused_ordering(1004) 00:08:57.186 fused_ordering(1005) 00:08:57.186 fused_ordering(1006) 00:08:57.186 fused_ordering(1007) 00:08:57.186 fused_ordering(1008) 00:08:57.186 fused_ordering(1009) 00:08:57.186 fused_ordering(1010) 00:08:57.186 fused_ordering(1011) 00:08:57.186 fused_ordering(1012) 00:08:57.186 fused_ordering(1013) 00:08:57.186 fused_ordering(1014) 00:08:57.186 fused_ordering(1015) 00:08:57.186 fused_ordering(1016) 00:08:57.186 fused_ordering(1017) 00:08:57.186 fused_ordering(1018) 00:08:57.186 fused_ordering(1019) 00:08:57.186 fused_ordering(1020) 00:08:57.186 fused_ordering(1021) 00:08:57.186 fused_ordering(1022) 00:08:57.186 fused_ordering(1023) 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.186 rmmod nvme_tcp 00:08:57.186 rmmod 
nvme_fabrics 00:08:57.186 rmmod nvme_keyring 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71425 ']' 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71425 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71425 ']' 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71425 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71425 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71425' 00:08:57.186 killing process with pid 71425 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71425 00:08:57.186 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71425 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.444 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.445 14:25:36 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:57.445 00:08:57.445 real 0m4.059s 00:08:57.445 user 0m4.972s 00:08:57.445 sys 0m1.293s 00:08:57.445 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.445 14:25:36 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:57.445 ************************************ 00:08:57.445 END TEST nvmf_fused_ordering 00:08:57.445 ************************************ 00:08:57.445 14:25:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:57.445 14:25:36 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:57.445 14:25:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.445 14:25:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.445 14:25:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.445 
************************************ 00:08:57.445 START TEST nvmf_delete_subsystem 00:08:57.445 ************************************ 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:57.445 * Looking for test storage... 00:08:57.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.445 14:25:36 
nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.445 14:25:36 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:57.445 14:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:57.445 Cannot find device "nvmf_tgt_br" 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:57.445 Cannot find device "nvmf_tgt_br2" 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:57.445 Cannot find device "nvmf_tgt_br" 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:57.445 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:57.703 Cannot find device "nvmf_tgt_br2" 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.703 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:57.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:08:57.704 00:08:57.704 --- 10.0.0.2 ping statistics --- 00:08:57.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.704 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:57.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:57.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:08:57.704 00:08:57.704 --- 10.0.0.3 ping statistics --- 00:08:57.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.704 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:57.704 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:57.962 00:08:57.962 --- 10.0.0.1 ping statistics --- 00:08:57.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.962 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71683 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71683 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71683 ']' 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
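The nvmf_veth_init block above builds the virtual topology the rest of the test runs on: a network namespace for the target, veth pairs whose host-side peers are enslaved to a bridge, 10.0.0.x/24 addresses, and an iptables rule admitting NVMe/TCP traffic on port 4420 (the earlier "Cannot find device" and "Cannot open network namespace" messages are just the preceding cleanup finding nothing to remove). Condensed into a stand-alone sketch that keeps the interface and address names from the log but leaves out the second target interface (nvmf_tgt_if2 / 10.0.0.3) for brevity:

# Sketch of the topology nvmf_veth_init creates (second target interface omitted).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so initiator and target traffic can meet.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic on the subsystem port and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # same sanity check as the ping output above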
00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.962 14:25:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.962 [2024-07-15 14:25:37.402910] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:08:57.962 [2024-07-15 14:25:37.403663] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.962 [2024-07-15 14:25:37.550162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.221 [2024-07-15 14:25:37.620243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.221 [2024-07-15 14:25:37.620531] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.221 [2024-07-15 14:25:37.620800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.221 [2024-07-15 14:25:37.621011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.221 [2024-07-15 14:25:37.621242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.221 [2024-07-15 14:25:37.621386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.221 [2024-07-15 14:25:37.621400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.787 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.787 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:58.787 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.787 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.787 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.045 [2024-07-15 14:25:38.422389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.045 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 [2024-07-15 14:25:38.438471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 NULL1 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 Delay0 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71740 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:59.046 14:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:59.046 [2024-07-15 14:25:38.633456] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
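The xtrace above drives the target one RPC at a time through the suite's rpc_cmd wrapper. Collected into a standalone sketch (scripts/rpc.py stands in for rpc_cmd, paths are relative to an SPDK checkout, and the running nvmf_tgt is assumed to be on the default /var/tmp/spdk.sock), the same setup sequence looks like this:

    # TCP transport with the options the trace passes (-t tcp -o -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Null bdev wrapped in a delay bdev, so completions are held back (~1 s per the -r/-t/-w/-n values)
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Initiator-side load that the test then deletes the subsystem out from under
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The delay bdev is what keeps commands outstanding long enough for the nvmf_delete_subsystem issued below to catch them in flight, which is where the long blocks of "completed with error (sct=0, sc=8)" further down come from.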
00:09:00.948 14:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.948 14:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.948 14:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 starting I/O failed: -6 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 [2024-07-15 14:25:40.668518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aada80 is same with the state(5) to be set 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 
Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Write completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.207 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error 
(sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 
starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 Write completed with error (sct=0, sc=8) 00:09:01.208 Read completed with error (sct=0, sc=8) 00:09:01.208 starting I/O failed: -6 00:09:01.208 starting I/O failed: -6 00:09:01.208 starting I/O failed: -6 00:09:01.208 starting I/O failed: -6 00:09:01.208 starting I/O failed: -6 00:09:01.208 starting I/O failed: -6 00:09:02.142 [2024-07-15 14:25:41.651428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a510 is same with the state(5) to be set 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed 
with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 [2024-07-15 14:25:41.668520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d1c00cfe0 is same with the state(5) to be set 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 [2024-07-15 14:25:41.669264] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5d1c00d600 is same with the state(5) to be set 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 [2024-07-15 14:25:41.669637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8a6f0 is same with the state(5) to be set 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Write completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.142 Read completed with error (sct=0, sc=8) 00:09:02.143 Read completed with error (sct=0, sc=8) 00:09:02.143 Write completed with error (sct=0, sc=8) 00:09:02.143 Write completed with error (sct=0, sc=8) 00:09:02.143 Write completed with error (sct=0, sc=8) 00:09:02.143 [2024-07-15 14:25:41.669936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aac4c0 is same with the state(5) to be set 00:09:02.143 Initializing NVMe Controllers 00:09:02.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:02.143 Controller IO queue size 128, less than required. 00:09:02.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:02.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:02.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:02.143 Initialization complete. 
Launching workers. 00:09:02.143 ======================================================== 00:09:02.143 Latency(us) 00:09:02.143 Device Information : IOPS MiB/s Average min max 00:09:02.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.85 0.08 896470.69 439.83 1010862.08 00:09:02.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.81 0.09 979437.43 969.78 2001043.17 00:09:02.143 ======================================================== 00:09:02.143 Total : 343.66 0.17 938673.43 439.83 2001043.17 00:09:02.143 00:09:02.143 [2024-07-15 14:25:41.670784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8a510 (9): Bad file descriptor 00:09:02.143 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:02.143 14:25:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.143 14:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:02.143 14:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71740 00:09:02.143 14:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71740 00:09:02.749 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71740) - No such process 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71740 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71740 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71740 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:02.749 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.750 14:25:42 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 [2024-07-15 14:25:42.196768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71785 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:02.750 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.021 [2024-07-15 14:25:42.365849] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
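The alternating kill -0 / sleep 0.5 lines that follow are a bounded polling loop from delete_subsystem.sh: the script waits for spdk_nvme_perf (pid 71785 here) to exit on its own once its 3-second run against the ~1 s delay bdev finishes, and gives up after roughly 20 half-second ticks. A sketch of the pattern (loop structure and error handling are inferred from the traced line numbers, not copied from the script):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf is still running
        sleep 0.5
        if (( delay++ > 20 )); then             # bail out after about ten seconds
            echo "spdk_nvme_perf did not exit in time" >&2
            break
        fi
    done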
00:09:03.277 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.277 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:03.277 14:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:03.841 14:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:03.841 14:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:03.841 14:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.403 14:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.403 14:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:04.403 14:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:04.661 14:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:04.661 14:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:04.661 14:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.224 14:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:05.224 14:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:05.224 14:25:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:05.789 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:05.789 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:05.789 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:06.047 Initializing NVMe Controllers 00:09:06.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:06.047 Controller IO queue size 128, less than required. 00:09:06.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:06.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:06.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:06.047 Initialization complete. Launching workers. 
00:09:06.047 ======================================================== 00:09:06.047 Latency(us) 00:09:06.047 Device Information : IOPS MiB/s Average min max 00:09:06.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003330.43 1000157.36 1042623.74 00:09:06.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005315.83 1000267.29 1041937.51 00:09:06.047 ======================================================== 00:09:06.047 Total : 256.00 0.12 1004323.13 1000157.36 1042623.74 00:09:06.047 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71785 00:09:06.307 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71785) - No such process 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71785 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.307 rmmod nvme_tcp 00:09:06.307 rmmod nvme_fabrics 00:09:06.307 rmmod nvme_keyring 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71683 ']' 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71683 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71683 ']' 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71683 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71683 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.307 killing process with pid 71683 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71683' 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71683 00:09:06.307 14:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71683 00:09:06.566 14:25:46 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:06.566 00:09:06.566 real 0m9.197s 00:09:06.566 user 0m28.602s 00:09:06.566 sys 0m1.511s 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.566 14:25:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.566 ************************************ 00:09:06.566 END TEST nvmf_delete_subsystem 00:09:06.566 ************************************ 00:09:06.566 14:25:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:06.566 14:25:46 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:06.566 14:25:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.566 14:25:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.566 14:25:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.566 ************************************ 00:09:06.566 START TEST nvmf_ns_masking 00:09:06.566 ************************************ 00:09:06.566 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:06.825 * Looking for test storage... 
00:09:06.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.825 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=04a9b319-4e50-46ce-9108-fd652459aed1 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=609af06e-864f-4b39-8290-5f337282d7c7 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:06.826 
14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6807b6dd-6cc0-4539-9f5e-6576310b3dc1 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:06.826 Cannot find device "nvmf_tgt_br" 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:09:06.826 14:25:46 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.826 Cannot find device "nvmf_tgt_br2" 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:06.826 Cannot find device "nvmf_tgt_br" 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:06.826 Cannot find device "nvmf_tgt_br2" 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.826 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.826 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:07.085 14:25:46 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:07.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:07.085 00:09:07.085 --- 10.0.0.2 ping statistics --- 00:09:07.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.085 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:07.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:07.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:09:07.085 00:09:07.085 --- 10.0.0.3 ping statistics --- 00:09:07.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.085 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:07.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:07.085 00:09:07.085 --- 10.0.0.1 ping statistics --- 00:09:07.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.085 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=72027 00:09:07.085 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 72027 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72027 ']' 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.086 14:25:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:07.086 [2024-07-15 14:25:46.662864] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:07.086 [2024-07-15 14:25:46.662991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.343 [2024-07-15 14:25:46.806299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.343 [2024-07-15 14:25:46.876539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.343 [2024-07-15 14:25:46.876594] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:07.343 [2024-07-15 14:25:46.876606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.343 [2024-07-15 14:25:46.876615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.343 [2024-07-15 14:25:46.876622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.343 [2024-07-15 14:25:46.876653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.276 14:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.537 [2024-07-15 14:25:47.972187] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.537 14:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:08.537 14:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:08.537 14:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:08.805 Malloc1 00:09:08.805 14:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:09.063 Malloc2 00:09:09.063 14:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:09.351 14:25:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:09.609 14:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.867 [2024-07-15 14:25:49.289016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.867 14:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:09.868 14:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6807b6dd-6cc0-4539-9f5e-6576310b3dc1 -a 10.0.0.2 -s 4420 -i 4 00:09:09.868 14:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.868 14:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:09.868 14:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.868 14:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:09.868 14:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
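On the host side, ns_masking.sh's connect and ns_is_visible helpers reduce to plain nvme-cli and jq calls. A condensed sketch using the values from this particular run (the host UUID 6807b6dd-6cc0-4539-9f5e-6576310b3dc1 and the controller name /dev/nvme0 are specific to this log; on another machine the controller name will differ):

    # Connect as host1, presenting the generated host ID, with 4 I/O queues
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 6807b6dd-6cc0-4539-9f5e-6576310b3dc1 -a 10.0.0.2 -s 4420 -i 4
    # Find which controller the subsystem came up as
    nvme list-subsys -o json | jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
    # A namespace counts as visible when it is listed and reports a non-zero NGUID
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid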
00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:12.397 [ 0]:0x1 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fcbf0b01f40416fa8cfe21d8b257869 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fcbf0b01f40416fa8cfe21d8b257869 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.397 [ 0]:0x1 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fcbf0b01f40416fa8cfe21d8b257869 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fcbf0b01f40416fa8cfe21d8b257869 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:12.397 [ 1]:0x2 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=117685a8da654815832bde6fbbd525ca 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:12.397 14:25:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.655 14:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.913 14:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6807b6dd-6cc0-4539-9f5e-6576310b3dc1 -a 10.0.0.2 -s 4420 -i 4 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:13.176 14:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:15.709 [ 0]:0x2 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=117685a8da654815832bde6fbbd525ca 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.709 14:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:15.709 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:15.709 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:15.709 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:15.709 [ 0]:0x1 00:09:15.709 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:15.709 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fcbf0b01f40416fa8cfe21d8b257869 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fcbf0b01f40416fa8cfe21d8b257869 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:15.967 [ 1]:0x2 00:09:15.967 14:25:55 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=117685a8da654815832bde6fbbd525ca 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:15.967 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:16.226 [ 0]:0x2 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=117685a8da654815832bde6fbbd525ca 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
00:09:16.226 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.485 14:25:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6807b6dd-6cc0-4539-9f5e-6576310b3dc1 -a 10.0.0.2 -s 4420 -i 4 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:16.743 14:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:18.644 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:18.902 [ 0]:0x1 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4fcbf0b01f40416fa8cfe21d8b257869 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4fcbf0b01f40416fa8cfe21d8b257869 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:09:18.902 [ 1]:0x2 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=117685a8da654815832bde6fbbd525ca 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:18.902 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.159 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:19.160 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.160 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:19.160 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.160 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:19.417 [ 0]:0x2 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:19.417 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=117685a8da654815832bde6fbbd525ca 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.418 14:25:58 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:19.418 14:25:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:19.675 [2024-07-15 14:25:59.083368] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:19.675 2024/07/15 14:25:59 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:09:19.675 request: 00:09:19.675 { 00:09:19.675 "method": "nvmf_ns_remove_host", 00:09:19.675 "params": { 00:09:19.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.675 "nsid": 2, 00:09:19.675 "host": "nqn.2016-06.io.spdk:host1" 00:09:19.675 } 00:09:19.675 } 00:09:19.675 Got JSON-RPC error response 00:09:19.675 GoRPCClient: error on JSON-RPC call 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:19.675 14:25:59 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:19.675 [ 0]:0x2 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=117685a8da654815832bde6fbbd525ca 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 117685a8da654815832bde6fbbd525ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:19.675 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72407 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72407 /var/tmp/host.sock 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72407 ']' 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:19.932 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
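At this point the suite brings up a second SPDK application (spdk_tgt on /var/tmp/host.sock, core mask 0x2) that plays the NVMe-oF host role through bdev_nvme instead of the kernel initiator, attaches one controller per host NQN, and compares the resulting bdev UUIDs against the NGUIDs assigned with -g. A condensed sketch of that host-side flow, using the socket and NQNs from this run (the hostrpc wrapper mirrors the helper the script uses):
# separate SPDK instance acting as the host, on its own RPC socket and core
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
# one controller per host NQN; each only sees the namespaces its NQN was granted
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
# the exposed bdevs (nvme0n1 and nvme1n2 here) report the assigned NGUIDs as their UUIDs
hostrpc bdev_get_bdevs | jq -r '.[].name'
hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'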
00:09:19.933 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:19.933 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:19.933 [2024-07-15 14:25:59.334641] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:19.933 [2024-07-15 14:25:59.334758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72407 ] 00:09:19.933 [2024-07-15 14:25:59.471087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.193 [2024-07-15 14:25:59.528963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.193 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.193 14:25:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:20.193 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.451 14:25:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.709 14:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 04a9b319-4e50-46ce-9108-fd652459aed1 00:09:20.709 14:26:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:20.709 14:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 04A9B3194E5046CE9108FD652459AED1 -i 00:09:20.967 14:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 609af06e-864f-4b39-8290-5f337282d7c7 00:09:20.967 14:26:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:20.967 14:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 609AF06E864F4B3982905F337282D7C7 -i 00:09:21.224 14:26:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:21.483 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:21.740 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:21.740 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:21.998 nvme0n1 00:09:22.255 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:22.255 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:22.511 nvme1n2 00:09:22.511 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:22.511 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:22.511 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:22.511 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:22.511 14:26:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:22.769 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:22.769 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:22.769 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:22.769 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:23.026 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 04a9b319-4e50-46ce-9108-fd652459aed1 == \0\4\a\9\b\3\1\9\-\4\e\5\0\-\4\6\c\e\-\9\1\0\8\-\f\d\6\5\2\4\5\9\a\e\d\1 ]] 00:09:23.026 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:23.026 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:23.026 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 609af06e-864f-4b39-8290-5f337282d7c7 == \6\0\9\a\f\0\6\e\-\8\6\4\f\-\4\b\3\9\-\8\2\9\0\-\5\f\3\3\7\2\8\2\d\7\c\7 ]] 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72407 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72407 ']' 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72407 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72407 00:09:23.284 killing process with pid 72407 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72407' 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72407 00:09:23.284 14:26:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72407 00:09:23.542 14:26:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:23.800 14:26:03 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.800 rmmod nvme_tcp 00:09:23.800 rmmod nvme_fabrics 00:09:23.800 rmmod nvme_keyring 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 72027 ']' 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 72027 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72027 ']' 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72027 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:23.800 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72027 00:09:24.058 killing process with pid 72027 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72027' 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72027 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72027 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:24.058 00:09:24.058 real 0m17.526s 00:09:24.058 user 0m27.619s 00:09:24.058 sys 0m2.589s 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.058 14:26:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:24.058 ************************************ 00:09:24.058 END TEST nvmf_ns_masking 00:09:24.058 ************************************ 00:09:24.317 14:26:03 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:24.317 14:26:03 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:09:24.317 14:26:03 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:09:24.317 14:26:03 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:24.317 14:26:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:24.317 14:26:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.317 14:26:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.317 ************************************ 00:09:24.317 START TEST nvmf_host_management 00:09:24.317 ************************************ 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:24.317 * Looking for test storage... 00:09:24.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.317 14:26:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:24.318 Cannot find device "nvmf_tgt_br" 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.318 Cannot find device "nvmf_tgt_br2" 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:09:24.318 14:26:03 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:24.318 Cannot find device "nvmf_tgt_br" 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:24.318 Cannot find device "nvmf_tgt_br2" 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:24.318 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.577 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.577 14:26:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:24.577 
14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:24.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:09:24.577 00:09:24.577 --- 10.0.0.2 ping statistics --- 00:09:24.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.577 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:24.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:24.577 00:09:24.577 --- 10.0.0.3 ping statistics --- 00:09:24.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.577 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:24.577 00:09:24.577 --- 10.0.0.1 ping statistics --- 00:09:24.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.577 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72751 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72751 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72751 ']' 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.577 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:24.836 [2024-07-15 14:26:04.197341] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:24.836 [2024-07-15 14:26:04.197458] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.836 [2024-07-15 14:26:04.337060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.836 [2024-07-15 14:26:04.396853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.836 [2024-07-15 14:26:04.396909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.836 [2024-07-15 14:26:04.396921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.836 [2024-07-15 14:26:04.396929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.836 [2024-07-15 14:26:04.396937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
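Before nvmf_tgt was started with core mask 0x1E above, nvmf_veth_init built the virtual test network. Ignoring the tolerated cleanup failures and the second target-side interface, the plumbing reduces to the following (interface names and addresses as in this run):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# bridge the two ends together and open the NVMe/TCP port
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                            # reachability check, as above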
00:09:24.836 [2024-07-15 14:26:04.397053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.836 [2024-07-15 14:26:04.397742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.836 [2024-07-15 14:26:04.397856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.836 [2024-07-15 14:26:04.397859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 [2024-07-15 14:26:04.518841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 Malloc0 00:09:25.095 [2024-07-15 14:26:04.589528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72804 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72804 /var/tmp/bdevperf.sock 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72804 ']' 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:25.095 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:25.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:25.096 { 00:09:25.096 "params": { 00:09:25.096 "name": "Nvme$subsystem", 00:09:25.096 "trtype": "$TEST_TRANSPORT", 00:09:25.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:25.096 "adrfam": "ipv4", 00:09:25.096 "trsvcid": "$NVMF_PORT", 00:09:25.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:25.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:25.096 "hdgst": ${hdgst:-false}, 00:09:25.096 "ddgst": ${ddgst:-false} 00:09:25.096 }, 00:09:25.096 "method": "bdev_nvme_attach_controller" 00:09:25.096 } 00:09:25.096 EOF 00:09:25.096 )") 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:25.096 14:26:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:25.096 "params": { 00:09:25.096 "name": "Nvme0", 00:09:25.096 "trtype": "tcp", 00:09:25.096 "traddr": "10.0.0.2", 00:09:25.096 "adrfam": "ipv4", 00:09:25.096 "trsvcid": "4420", 00:09:25.096 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:25.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:25.096 "hdgst": false, 00:09:25.096 "ddgst": false 00:09:25.096 }, 00:09:25.096 "method": "bdev_nvme_attach_controller" 00:09:25.096 }' 00:09:25.354 [2024-07-15 14:26:04.703587] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:25.354 [2024-07-15 14:26:04.703684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72804 ] 00:09:25.354 [2024-07-15 14:26:04.870394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.354 [2024-07-15 14:26:04.948006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.613 Running I/O for 10 seconds... 
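A sketch of the bdevperf side, assuming the standard SPDK "subsystems" JSON envelope around the params/method object printed above; the trace streams the config over the anonymous /dev/fd/63 pipe, so the /tmp path used here is purely illustrative.

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same invocation as the trace: queue depth 64, 64 KiB verify I/O for 10 seconds.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10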
00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
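The waitforio check above reduces to polling bdevperf's own RPC socket for read completions; a sketch, assuming a one-second cadence for the up-to-ten attempts the trace counts down:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 10 -1 1); do
    ops=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    # The first sample above already reports 771 completed reads, past the 100-op threshold.
    [ "$ops" -ge 100 ] && break
    sleep 1
done

With I/O confirmed in flight, the test then removes nqn.2016-06.io.spdk:host0 from cnode0's allowed-host list and immediately re-adds it; the target responds by tearing down that host's queue pair, which produces the abort and reset sequence traced next.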
00:09:26.181 [2024-07-15 14:26:05.718939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.181 [2024-07-15 14:26:05.718988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.719004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.181 [2024-07-15 14:26:05.719014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.719024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.181 [2024-07-15 14:26:05.719034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.181 [2024-07-15 14:26:05.719044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.181 [2024-07-15 14:26:05.719054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.719063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x606af0 is same with the state(5) to be set 00:09:26.181 14:26:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:26.181 [2024-07-15 14:26:05.730939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x606af0 (9): Bad file descriptor 00:09:26.181 [2024-07-15 14:26:05.731035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 
14:26:05.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.181 [2024-07-15 14:26:05.731301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.181 [2024-07-15 14:26:05.731311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 
14:26:05.731351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 
14:26:05.731580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 
14:26:05.731815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.731988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.731997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 
14:26:05.732018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 
14:26:05.732230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.182 [2024-07-15 14:26:05.732412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.182 [2024-07-15 14:26:05.732473] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x606820 was disconnected and freed. reset controller. 
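Each ABORTED - SQ DELETION completion above is one in-flight 64 KiB WRITE being failed back when the target drops the queue pair after the remove_host call: with 65536-byte I/O on 512-byte blocks each command spans 128 blocks, which is exactly the lba stride shown, and the range 114688 through 122752 works out to (122752 - 114688)/128 + 1 = 64 commands, matching the -q 64 queue depth. Once the last one is reaped, bdev_nvme frees qpair 0x606820 and schedules the controller reset logged next.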
00:09:26.182 [2024-07-15 14:26:05.733658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:26.182 task offset: 114688 on job bdev=Nvme0n1 fails 00:09:26.182 00:09:26.182 Latency(us) 00:09:26.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.182 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:26.182 Job: Nvme0n1 ended in about 0.64 seconds with error 00:09:26.182 Verification LBA range: start 0x0 length 0x400 00:09:26.182 Nvme0n1 : 0.64 1402.40 87.65 100.17 0.00 41317.59 1891.61 42896.29 00:09:26.182 =================================================================================================================== 00:09:26.182 Total : 1402.40 87.65 100.17 0.00 41317.59 1891.61 42896.29 00:09:26.182 [2024-07-15 14:26:05.735679] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:26.182 [2024-07-15 14:26:05.741965] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72804 00:09:27.576 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72804) - No such process 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:27.576 { 00:09:27.576 "params": { 00:09:27.576 "name": "Nvme$subsystem", 00:09:27.576 "trtype": "$TEST_TRANSPORT", 00:09:27.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.576 "adrfam": "ipv4", 00:09:27.576 "trsvcid": "$NVMF_PORT", 00:09:27.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.576 "hdgst": ${hdgst:-false}, 00:09:27.576 "ddgst": ${ddgst:-false} 00:09:27.576 }, 00:09:27.576 "method": "bdev_nvme_attach_controller" 00:09:27.576 } 00:09:27.576 EOF 00:09:27.576 )") 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
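The error-run summary above is consistent with that abort count: 1402.40 IOPS of 64 KiB I/O is 1402.40 x 64 / 1024 = 87.65 MiB/s, and 100.17 Fail/s over the 0.64 s runtime is roughly 64 failed I/Os, i.e. the 64 aborted WRITEs. The kill -9 72804 that follows reports "No such process" only because bdevperf had already exited on its error path (spdk_app_stop'd on non-zero); line 91 of host_management.sh falls through to true, so the cleanup tolerates it. The second bdevperf launch prepared here reuses the same attach-controller JSON but runs for just one second (-t 1) to confirm the target serves I/O again after host0 was re-added.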
00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:09:27.576 14:26:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:27.576 "params": { 00:09:27.576 "name": "Nvme0", 00:09:27.576 "trtype": "tcp", 00:09:27.576 "traddr": "10.0.0.2", 00:09:27.576 "adrfam": "ipv4", 00:09:27.576 "trsvcid": "4420", 00:09:27.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:27.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:27.576 "hdgst": false, 00:09:27.576 "ddgst": false 00:09:27.576 }, 00:09:27.576 "method": "bdev_nvme_attach_controller" 00:09:27.576 }' 00:09:27.576 [2024-07-15 14:26:06.776161] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:27.576 [2024-07-15 14:26:06.776254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72854 ] 00:09:27.576 [2024-07-15 14:26:06.909329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.576 [2024-07-15 14:26:06.993877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.576 Running I/O for 1 seconds... 00:09:28.979 00:09:28.979 Latency(us) 00:09:28.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.979 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:28.979 Verification LBA range: start 0x0 length 0x400 00:09:28.979 Nvme0n1 : 1.01 1451.44 90.72 0.00 0.00 43110.71 5391.83 48615.80 00:09:28.979 =================================================================================================================== 00:09:28.979 Total : 1451.44 90.72 0.00 0.00 43110.71 5391.83 48615.80 00:09:28.979 14:26:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:28.979 14:26:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:28.979 14:26:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:28.980 rmmod nvme_tcp 00:09:28.980 rmmod nvme_fabrics 00:09:28.980 rmmod nvme_keyring 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72751 ']' 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@490 -- # killprocess 72751 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72751 ']' 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72751 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72751 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72751' 00:09:28.980 killing process with pid 72751 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72751 00:09:28.980 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72751 00:09:29.238 [2024-07-15 14:26:08.616193] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:29.238 00:09:29.238 real 0m4.998s 00:09:29.238 user 0m19.360s 00:09:29.238 sys 0m1.202s 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.238 14:26:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:29.238 ************************************ 00:09:29.238 END TEST nvmf_host_management 00:09:29.238 ************************************ 00:09:29.238 14:26:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:29.238 14:26:08 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:29.238 14:26:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.238 14:26:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.238 14:26:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.238 ************************************ 00:09:29.238 START TEST nvmf_lvol 00:09:29.238 ************************************ 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:29.238 * Looking for test storage... 00:09:29.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.238 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.496 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:29.496 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:29.496 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.496 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.496 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
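The sizes defined at the top of nvmf_lvol.sh drive the whole provisioning chain for this test. Gathered in one place as a sketch, with illustrative shell variables ($rpc, $lvs, $lvol) and a final bdev_lvol_resize step that is an assumption about where LVOL_BDEV_FINAL_SIZE is used beyond the end of this excerpt:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                    # Malloc0: MALLOC_BDEV_SIZE MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # returns the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # LVOL_BDEV_INIT_SIZE = 20 MiB
$rpc bdev_lvol_resize "$lvol" 30                                  # assumed later step toward LVOL_BDEV_FINAL_SIZE = 30 MiB

All but the last call mirror rpc.py invocations that appear verbatim in the trace further below.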
00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:29.497 Cannot find device "nvmf_tgt_br" 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.497 Cannot find device "nvmf_tgt_br2" 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:29.497 Cannot find device "nvmf_tgt_br" 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:29.497 Cannot find device "nvmf_tgt_br2" 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:09:29.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.497 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.498 14:26:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.498 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:29.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:09:29.757 00:09:29.757 --- 10.0.0.2 ping statistics --- 00:09:29.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.757 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:29.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:09:29.757 00:09:29.757 --- 10.0.0.3 ping statistics --- 00:09:29.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.757 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:29.757 00:09:29.757 --- 10.0.0.1 ping statistics --- 00:09:29.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.757 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=73068 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 73068 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 73068 ']' 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.757 14:26:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:29.757 [2024-07-15 14:26:09.246573] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
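At this point nvmf_veth_init has finished; the earlier "Cannot find device" and "Cannot open network namespace" messages are only the best-effort cleanup of a previous run and are expected on a fresh host. Condensed, the topology the trace above builds is the following sketch (interface names and addresses exactly as used by the test):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host-side veth ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # plus the 10.0.0.3 and reverse 10.0.0.1 checks shown above

The target application is then started inside nvmf_tgt_ns_spdk (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7), so it listens on 10.0.0.2 while the initiator-side tools run from the root namespace on 10.0.0.1.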
00:09:29.757 [2024-07-15 14:26:09.246656] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.015 [2024-07-15 14:26:09.383329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.015 [2024-07-15 14:26:09.442975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.015 [2024-07-15 14:26:09.443287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.015 [2024-07-15 14:26:09.443725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.015 [2024-07-15 14:26:09.443970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.015 [2024-07-15 14:26:09.444162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.015 [2024-07-15 14:26:09.444340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.015 [2024-07-15 14:26:09.444497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.015 [2024-07-15 14:26:09.444575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:30.946 [2024-07-15 14:26:10.497942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.946 14:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.203 14:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:31.203 14:26:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.768 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:31.768 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:32.026 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:32.325 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cf082e51-062d-4929-a296-40f0dfab5683 00:09:32.325 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cf082e51-062d-4929-a296-40f0dfab5683 lvol 20 00:09:32.586 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4ce343bc-8d09-45ac-8804-a165f7c3907a 00:09:32.586 14:26:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:32.845 14:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ce343bc-8d09-45ac-8804-a165f7c3907a 00:09:33.103 14:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:33.103 [2024-07-15 14:26:12.676194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.360 14:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.618 14:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73210 00:09:33.618 14:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:33.618 14:26:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:34.551 14:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4ce343bc-8d09-45ac-8804-a165f7c3907a MY_SNAPSHOT 00:09:34.809 14:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9b6fbc06-4b66-4d5d-9797-229a9bf6bcff 00:09:34.809 14:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4ce343bc-8d09-45ac-8804-a165f7c3907a 30 00:09:35.067 14:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9b6fbc06-4b66-4d5d-9797-229a9bf6bcff MY_CLONE 00:09:35.325 14:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c06aa574-f07c-4e99-bd6a-0fca9eeeee84 00:09:35.325 14:26:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c06aa574-f07c-4e99-bd6a-0fca9eeeee84 00:09:36.258 14:26:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73210 00:09:44.360 Initializing NVMe Controllers 00:09:44.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:44.360 Controller IO queue size 128, less than required. 00:09:44.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:44.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:44.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:44.360 Initialization complete. Launching workers. 
00:09:44.360 ======================================================== 00:09:44.360 Latency(us) 00:09:44.360 Device Information : IOPS MiB/s Average min max 00:09:44.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10559.00 41.25 12127.14 516.93 61506.41 00:09:44.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10636.80 41.55 12039.08 3245.18 59695.68 00:09:44.360 ======================================================== 00:09:44.360 Total : 21195.80 82.80 12082.95 516.93 61506.41 00:09:44.360 00:09:44.360 14:26:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:44.360 14:26:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4ce343bc-8d09-45ac-8804-a165f7c3907a 00:09:44.360 14:26:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf082e51-062d-4929-a296-40f0dfab5683 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.619 rmmod nvme_tcp 00:09:44.619 rmmod nvme_fabrics 00:09:44.619 rmmod nvme_keyring 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 73068 ']' 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 73068 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 73068 ']' 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 73068 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73068 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:44.619 killing process with pid 73068 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73068' 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 73068 00:09:44.619 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 73068 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.877 
14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:44.877 00:09:44.877 real 0m15.673s 00:09:44.877 user 1m5.875s 00:09:44.877 sys 0m3.840s 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.877 14:26:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:44.877 ************************************ 00:09:44.877 END TEST nvmf_lvol 00:09:44.877 ************************************ 00:09:44.877 14:26:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:44.877 14:26:24 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:44.877 14:26:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.877 14:26:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.877 14:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.135 ************************************ 00:09:45.135 START TEST nvmf_lvs_grow 00:09:45.135 ************************************ 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:45.135 * Looking for test storage... 
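That closes out nvmf_lvol. Stripped of the xtrace noise, the run above reduces to the following RPC sequence against the in-namespace target (rpc.py and binary paths abbreviated; the UUID variables are captured from RPC output exactly as the script does, with this run's values noted in comments):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                            # Malloc0: 64 MB, 512 B blocks
  rpc.py bdev_malloc_create 64 512                            # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)            # cf082e51-062d-4929-a296-40f0dfab5683
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)           # 4ce343bc-8d09-45ac-8804-a165f7c3907a
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1
  snapshot=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # 9b6fbc06-4b66-4d5d-9797-229a9bf6bcff
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snapshot" MY_CLONE)        # c06aa574-f07c-4e99-bd6a-0fca9eeeee84
  rpc.py bdev_lvol_inflate "$clone"
  wait "$perf_pid"
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"

The snapshot/resize/clone/inflate calls land while spdk_nvme_perf is still driving random writes at the subsystem; the latency table earlier in the trace is the 10-second result of that run.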
00:09:45.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.135 14:26:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:45.136 Cannot find device "nvmf_tgt_br" 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:45.136 Cannot find device "nvmf_tgt_br2" 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:45.136 Cannot find device "nvmf_tgt_br" 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:45.136 Cannot find device "nvmf_tgt_br2" 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:45.136 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.395 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.395 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.395 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:45.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:45.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:09:45.396 00:09:45.396 --- 10.0.0.2 ping statistics --- 00:09:45.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.396 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:45.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:45.396 00:09:45.396 --- 10.0.0.3 ping statistics --- 00:09:45.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.396 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:09:45.396 00:09:45.396 --- 10.0.0.1 ping statistics --- 00:09:45.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.396 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73574 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73574 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73574 ']' 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
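The lvs_grow suite repeats the same namespace setup and then starts its own target with a single-core mask; waitforlisten blocks until the application is answering on /var/tmp/spdk.sock. The real helper lives in common/autotest_common.sh (the @829-@838 references in the trace); a minimal sketch of the same idea, with the polling loop being illustrative rather than the actual implementation, is:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Poll the JSON-RPC socket until the app starts servicing it; rpc_get_methods is a cheap query.
  for ((i = 0; i < 100; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done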
00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:45.396 14:26:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.672 [2024-07-15 14:26:25.001669] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:09:45.672 [2024-07-15 14:26:25.001773] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.672 [2024-07-15 14:26:25.140361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.672 [2024-07-15 14:26:25.208203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.672 [2024-07-15 14:26:25.208289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.672 [2024-07-15 14:26:25.208313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.672 [2024-07-15 14:26:25.208331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.672 [2024-07-15 14:26:25.208347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.672 [2024-07-15 14:26:25.208394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.962 14:26:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:46.219 [2024-07-15 14:26:25.598537] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 ************************************ 00:09:46.219 START TEST lvs_grow_clean 00:09:46.219 ************************************ 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:46.219 14:26:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.219 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.476 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:46.476 14:26:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:46.732 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0dde5099-000f-4287-98d6-de10c31d2d04 00:09:46.732 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:09:46.732 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:46.990 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:46.990 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:46.990 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0dde5099-000f-4287-98d6-de10c31d2d04 lvol 150 00:09:47.249 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0bd87af0-6a68-4c0c-aad8-0bf9f848e90a 00:09:47.249 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:47.249 14:26:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:47.507 [2024-07-15 14:26:27.010738] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:47.507 [2024-07-15 14:26:27.010857] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:47.507 true 00:09:47.507 14:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:09:47.507 14:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:47.766 14:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:47.766 14:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.024 14:26:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0bd87af0-6a68-4c0c-aad8-0bf9f848e90a 00:09:48.283 14:26:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:48.542 [2024-07-15 14:26:27.983214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.542 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.800 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73722 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73722 /var/tmp/bdevperf.sock 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73722 ']' 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.801 14:26:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:49.059 [2024-07-15 14:26:28.398875] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
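For the lvs_grow_clean pass, the cluster counts the test asserts on fall straight out of the sizes involved. A condensed sketch of the setup traced above (paths abbreviated; this run's UUIDs in comments):

  truncate -s 200M .../test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096           # AIO bdev, 4 KiB blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)                         # 0dde5099-000f-4287-98d6-de10c31d2d04
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49: 200 MiB / 4 MiB = 50 clusters,
                                                                               # minus what the lvstore keeps for metadata
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)                           # 0bd87af0-...; 150 MiB -> 38 allocated clusters
  truncate -s 400M .../test/nvmf/target/aio_bdev                               # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                                              # block count goes 51200 -> 102400
  # total_data_clusters still reads 49 at this point; only after the run issues
  # bdev_lvol_grow_lvstore -u "$lvs" (mid-I/O, further down in the trace) does it report 99.

The later free_clusters check of 61 is the same arithmetic: 99 data clusters minus the 38 allocated to the 150 MiB lvol.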
00:09:49.059 [2024-07-15 14:26:28.398987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73722 ] 00:09:49.059 [2024-07-15 14:26:28.537720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.059 [2024-07-15 14:26:28.595351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.991 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.991 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:49.991 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:50.249 Nvme0n1 00:09:50.249 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:50.507 [ 00:09:50.507 { 00:09:50.507 "aliases": [ 00:09:50.507 "0bd87af0-6a68-4c0c-aad8-0bf9f848e90a" 00:09:50.507 ], 00:09:50.507 "assigned_rate_limits": { 00:09:50.507 "r_mbytes_per_sec": 0, 00:09:50.507 "rw_ios_per_sec": 0, 00:09:50.507 "rw_mbytes_per_sec": 0, 00:09:50.507 "w_mbytes_per_sec": 0 00:09:50.507 }, 00:09:50.507 "block_size": 4096, 00:09:50.507 "claimed": false, 00:09:50.507 "driver_specific": { 00:09:50.507 "mp_policy": "active_passive", 00:09:50.507 "nvme": [ 00:09:50.507 { 00:09:50.507 "ctrlr_data": { 00:09:50.507 "ana_reporting": false, 00:09:50.507 "cntlid": 1, 00:09:50.507 "firmware_revision": "24.09", 00:09:50.507 "model_number": "SPDK bdev Controller", 00:09:50.507 "multi_ctrlr": true, 00:09:50.507 "oacs": { 00:09:50.507 "firmware": 0, 00:09:50.507 "format": 0, 00:09:50.507 "ns_manage": 0, 00:09:50.507 "security": 0 00:09:50.507 }, 00:09:50.508 "serial_number": "SPDK0", 00:09:50.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.508 "vendor_id": "0x8086" 00:09:50.508 }, 00:09:50.508 "ns_data": { 00:09:50.508 "can_share": true, 00:09:50.508 "id": 1 00:09:50.508 }, 00:09:50.508 "trid": { 00:09:50.508 "adrfam": "IPv4", 00:09:50.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.508 "traddr": "10.0.0.2", 00:09:50.508 "trsvcid": "4420", 00:09:50.508 "trtype": "TCP" 00:09:50.508 }, 00:09:50.508 "vs": { 00:09:50.508 "nvme_version": "1.3" 00:09:50.508 } 00:09:50.508 } 00:09:50.508 ] 00:09:50.508 }, 00:09:50.508 "memory_domains": [ 00:09:50.508 { 00:09:50.508 "dma_device_id": "system", 00:09:50.508 "dma_device_type": 1 00:09:50.508 } 00:09:50.508 ], 00:09:50.508 "name": "Nvme0n1", 00:09:50.508 "num_blocks": 38912, 00:09:50.508 "product_name": "NVMe disk", 00:09:50.508 "supported_io_types": { 00:09:50.508 "abort": true, 00:09:50.508 "compare": true, 00:09:50.508 "compare_and_write": true, 00:09:50.508 "copy": true, 00:09:50.508 "flush": true, 00:09:50.508 "get_zone_info": false, 00:09:50.508 "nvme_admin": true, 00:09:50.508 "nvme_io": true, 00:09:50.508 "nvme_io_md": false, 00:09:50.508 "nvme_iov_md": false, 00:09:50.508 "read": true, 00:09:50.508 "reset": true, 00:09:50.508 "seek_data": false, 00:09:50.508 "seek_hole": false, 00:09:50.508 "unmap": true, 00:09:50.508 "write": true, 00:09:50.508 "write_zeroes": true, 00:09:50.508 "zcopy": false, 00:09:50.508 
"zone_append": false, 00:09:50.508 "zone_management": false 00:09:50.508 }, 00:09:50.508 "uuid": "0bd87af0-6a68-4c0c-aad8-0bf9f848e90a", 00:09:50.508 "zoned": false 00:09:50.508 } 00:09:50.508 ] 00:09:50.508 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73774 00:09:50.508 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.508 14:26:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:50.508 Running I/O for 10 seconds... 00:09:51.879 Latency(us) 00:09:51.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.879 Nvme0n1 : 1.00 8171.00 31.92 0.00 0.00 0.00 0.00 0.00 00:09:51.879 =================================================================================================================== 00:09:51.879 Total : 8171.00 31.92 0.00 0.00 0.00 0.00 0.00 00:09:51.879 00:09:52.447 14:26:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:09:52.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.447 Nvme0n1 : 2.00 8163.50 31.89 0.00 0.00 0.00 0.00 0.00 00:09:52.447 =================================================================================================================== 00:09:52.447 Total : 8163.50 31.89 0.00 0.00 0.00 0.00 0.00 00:09:52.447 00:09:52.704 true 00:09:52.704 14:26:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:09:52.704 14:26:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:53.270 14:26:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:53.270 14:26:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:53.270 14:26:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73774 00:09:53.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.528 Nvme0n1 : 3.00 8173.33 31.93 0.00 0.00 0.00 0.00 0.00 00:09:53.528 =================================================================================================================== 00:09:53.528 Total : 8173.33 31.93 0.00 0.00 0.00 0.00 0.00 00:09:53.528 00:09:54.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.461 Nvme0n1 : 4.00 8140.25 31.80 0.00 0.00 0.00 0.00 0.00 00:09:54.461 =================================================================================================================== 00:09:54.461 Total : 8140.25 31.80 0.00 0.00 0.00 0.00 0.00 00:09:54.461 00:09:55.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.831 Nvme0n1 : 5.00 8060.80 31.49 0.00 0.00 0.00 0.00 0.00 00:09:55.831 =================================================================================================================== 00:09:55.831 Total : 8060.80 31.49 0.00 0.00 0.00 0.00 0.00 00:09:55.831 00:09:56.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.760 
Nvme0n1 : 6.00 8024.33 31.35 0.00 0.00 0.00 0.00 0.00 00:09:56.760 =================================================================================================================== 00:09:56.760 Total : 8024.33 31.35 0.00 0.00 0.00 0.00 0.00 00:09:56.760 00:09:57.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.729 Nvme0n1 : 7.00 7950.86 31.06 0.00 0.00 0.00 0.00 0.00 00:09:57.729 =================================================================================================================== 00:09:57.729 Total : 7950.86 31.06 0.00 0.00 0.00 0.00 0.00 00:09:57.729 00:09:58.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.662 Nvme0n1 : 8.00 7894.00 30.84 0.00 0.00 0.00 0.00 0.00 00:09:58.662 =================================================================================================================== 00:09:58.662 Total : 7894.00 30.84 0.00 0.00 0.00 0.00 0.00 00:09:58.662 00:09:59.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.595 Nvme0n1 : 9.00 7867.89 30.73 0.00 0.00 0.00 0.00 0.00 00:09:59.595 =================================================================================================================== 00:09:59.595 Total : 7867.89 30.73 0.00 0.00 0.00 0.00 0.00 00:09:59.595 00:10:00.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.528 Nvme0n1 : 10.00 7821.20 30.55 0.00 0.00 0.00 0.00 0.00 00:10:00.528 =================================================================================================================== 00:10:00.528 Total : 7821.20 30.55 0.00 0.00 0.00 0.00 0.00 00:10:00.528 00:10:00.528 00:10:00.528 Latency(us) 00:10:00.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.528 Nvme0n1 : 10.01 7825.97 30.57 0.00 0.00 16346.82 7864.32 36700.16 00:10:00.528 =================================================================================================================== 00:10:00.528 Total : 7825.97 30.57 0.00 0.00 16346.82 7864.32 36700.16 00:10:00.528 0 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73722 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73722 ']' 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73722 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73722 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:00.528 killing process with pid 73722 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73722' 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73722 00:10:00.528 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.528 00:10:00.528 Latency(us) 00:10:00.528 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.528 =================================================================================================================== 00:10:00.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.528 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73722 00:10:00.787 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.045 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:01.303 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:01.303 14:26:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.866 [2024-07-15 14:26:41.386449] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.866 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:01.867 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:02.124 2024/07/15 14:26:41 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:0dde5099-000f-4287-98d6-de10c31d2d04], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:02.124 request: 00:10:02.124 { 00:10:02.124 "method": "bdev_lvol_get_lvstores", 00:10:02.124 "params": { 00:10:02.124 "uuid": "0dde5099-000f-4287-98d6-de10c31d2d04" 00:10:02.124 } 00:10:02.124 } 00:10:02.124 Got JSON-RPC error response 00:10:02.124 GoRPCClient: error on JSON-RPC call 00:10:02.124 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:10:02.124 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.124 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.124 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.124 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.381 aio_bdev 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0bd87af0-6a68-4c0c-aad8-0bf9f848e90a 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0bd87af0-6a68-4c0c-aad8-0bf9f848e90a 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:02.638 14:26:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:02.638 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0bd87af0-6a68-4c0c-aad8-0bf9f848e90a -t 2000 00:10:02.896 [ 00:10:02.896 { 00:10:02.896 "aliases": [ 00:10:02.896 "lvs/lvol" 00:10:02.896 ], 00:10:02.896 "assigned_rate_limits": { 00:10:02.896 "r_mbytes_per_sec": 0, 00:10:02.896 "rw_ios_per_sec": 0, 00:10:02.896 "rw_mbytes_per_sec": 0, 00:10:02.896 "w_mbytes_per_sec": 0 00:10:02.896 }, 00:10:02.896 "block_size": 4096, 00:10:02.896 "claimed": false, 00:10:02.896 "driver_specific": { 00:10:02.896 "lvol": { 00:10:02.896 "base_bdev": "aio_bdev", 00:10:02.896 "clone": false, 00:10:02.896 "esnap_clone": false, 00:10:02.896 "lvol_store_uuid": "0dde5099-000f-4287-98d6-de10c31d2d04", 00:10:02.896 "num_allocated_clusters": 38, 00:10:02.896 "snapshot": false, 00:10:02.896 "thin_provision": false 00:10:02.896 } 00:10:02.896 }, 00:10:02.896 "name": "0bd87af0-6a68-4c0c-aad8-0bf9f848e90a", 00:10:02.896 "num_blocks": 38912, 00:10:02.896 "product_name": "Logical Volume", 00:10:02.896 "supported_io_types": { 00:10:02.896 "abort": false, 00:10:02.896 "compare": false, 00:10:02.896 "compare_and_write": false, 00:10:02.896 "copy": false, 00:10:02.896 "flush": false, 00:10:02.896 "get_zone_info": false, 00:10:02.896 "nvme_admin": false, 00:10:02.896 "nvme_io": false, 00:10:02.896 "nvme_io_md": false, 00:10:02.896 "nvme_iov_md": false, 00:10:02.896 "read": true, 00:10:02.896 "reset": true, 
00:10:02.896 "seek_data": true, 00:10:02.897 "seek_hole": true, 00:10:02.897 "unmap": true, 00:10:02.897 "write": true, 00:10:02.897 "write_zeroes": true, 00:10:02.897 "zcopy": false, 00:10:02.897 "zone_append": false, 00:10:02.897 "zone_management": false 00:10:02.897 }, 00:10:02.897 "uuid": "0bd87af0-6a68-4c0c-aad8-0bf9f848e90a", 00:10:02.897 "zoned": false 00:10:02.897 } 00:10:02.897 ] 00:10:02.897 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:10:02.897 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:02.897 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:03.155 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:03.155 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:03.155 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:03.413 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:03.413 14:26:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0bd87af0-6a68-4c0c-aad8-0bf9f848e90a 00:10:03.979 14:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0dde5099-000f-4287-98d6-de10c31d2d04 00:10:03.979 14:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:04.237 14:26:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.871 ************************************ 00:10:04.871 END TEST lvs_grow_clean 00:10:04.871 ************************************ 00:10:04.871 00:10:04.871 real 0m18.551s 00:10:04.871 user 0m18.046s 00:10:04.871 sys 0m2.133s 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 ************************************ 00:10:04.871 START TEST lvs_grow_dirty 00:10:04.871 ************************************ 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.871 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:05.129 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:05.129 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:05.388 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9356322-627f-4a63-9f15-1568d02903ea 00:10:05.388 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:05.388 14:26:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:05.647 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:05.647 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:05.647 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f9356322-627f-4a63-9f15-1568d02903ea lvol 150 00:10:05.905 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:05.905 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:05.905 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:06.164 [2024-07-15 14:26:45.648642] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:06.164 [2024-07-15 14:26:45.648742] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:06.164 true 00:10:06.164 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:06.164 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:06.422 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:06.422 14:26:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:06.679 14:26:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:06.937 14:26:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:07.195 [2024-07-15 14:26:46.749270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.195 14:26:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:07.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74177 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74177 /var/tmp/bdevperf.sock 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74177 ']' 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:07.453 14:26:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 [2024-07-15 14:26:47.059248] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
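Condensed from the xtrace above, the dirty-grow setup amounts to the following RPC sequence. This is a sketch only: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, $lvs and $lvol stand in for the UUIDs the create calls print (the script captures them the same way), and the sizes, block size and listener address are the ones shown in the log.

  # 200 MiB file-backed AIO bdev with 4 KiB blocks
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096

  # lvstore with 4 MiB clusters (49 data clusters at this file size), plus a 150 MiB lvol
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)

  # enlarge the backing file; the rescan resizes the AIO bdev but not yet the lvstore
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev

  # export the lvol over NVMe/TCP
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420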
00:10:07.711 [2024-07-15 14:26:47.059352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74177 ] 00:10:07.711 [2024-07-15 14:26:47.194897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.711 [2024-07-15 14:26:47.255286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.644 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.644 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:08.644 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:08.904 Nvme0n1 00:10:08.904 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:09.162 [ 00:10:09.162 { 00:10:09.162 "aliases": [ 00:10:09.162 "fe4924de-d2ac-4c2a-aedf-ac9052a110a7" 00:10:09.162 ], 00:10:09.162 "assigned_rate_limits": { 00:10:09.162 "r_mbytes_per_sec": 0, 00:10:09.162 "rw_ios_per_sec": 0, 00:10:09.162 "rw_mbytes_per_sec": 0, 00:10:09.162 "w_mbytes_per_sec": 0 00:10:09.162 }, 00:10:09.162 "block_size": 4096, 00:10:09.162 "claimed": false, 00:10:09.162 "driver_specific": { 00:10:09.162 "mp_policy": "active_passive", 00:10:09.162 "nvme": [ 00:10:09.162 { 00:10:09.162 "ctrlr_data": { 00:10:09.162 "ana_reporting": false, 00:10:09.162 "cntlid": 1, 00:10:09.162 "firmware_revision": "24.09", 00:10:09.162 "model_number": "SPDK bdev Controller", 00:10:09.162 "multi_ctrlr": true, 00:10:09.162 "oacs": { 00:10:09.162 "firmware": 0, 00:10:09.162 "format": 0, 00:10:09.162 "ns_manage": 0, 00:10:09.162 "security": 0 00:10:09.162 }, 00:10:09.162 "serial_number": "SPDK0", 00:10:09.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:09.162 "vendor_id": "0x8086" 00:10:09.162 }, 00:10:09.162 "ns_data": { 00:10:09.162 "can_share": true, 00:10:09.162 "id": 1 00:10:09.162 }, 00:10:09.162 "trid": { 00:10:09.162 "adrfam": "IPv4", 00:10:09.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:09.162 "traddr": "10.0.0.2", 00:10:09.162 "trsvcid": "4420", 00:10:09.162 "trtype": "TCP" 00:10:09.162 }, 00:10:09.162 "vs": { 00:10:09.162 "nvme_version": "1.3" 00:10:09.162 } 00:10:09.162 } 00:10:09.162 ] 00:10:09.162 }, 00:10:09.162 "memory_domains": [ 00:10:09.162 { 00:10:09.162 "dma_device_id": "system", 00:10:09.162 "dma_device_type": 1 00:10:09.162 } 00:10:09.162 ], 00:10:09.162 "name": "Nvme0n1", 00:10:09.162 "num_blocks": 38912, 00:10:09.162 "product_name": "NVMe disk", 00:10:09.162 "supported_io_types": { 00:10:09.162 "abort": true, 00:10:09.162 "compare": true, 00:10:09.162 "compare_and_write": true, 00:10:09.163 "copy": true, 00:10:09.163 "flush": true, 00:10:09.163 "get_zone_info": false, 00:10:09.163 "nvme_admin": true, 00:10:09.163 "nvme_io": true, 00:10:09.163 "nvme_io_md": false, 00:10:09.163 "nvme_iov_md": false, 00:10:09.163 "read": true, 00:10:09.163 "reset": true, 00:10:09.163 "seek_data": false, 00:10:09.163 "seek_hole": false, 00:10:09.163 "unmap": true, 00:10:09.163 "write": true, 00:10:09.163 "write_zeroes": true, 00:10:09.163 "zcopy": false, 00:10:09.163 
"zone_append": false, 00:10:09.163 "zone_management": false 00:10:09.163 }, 00:10:09.163 "uuid": "fe4924de-d2ac-4c2a-aedf-ac9052a110a7", 00:10:09.163 "zoned": false 00:10:09.163 } 00:10:09.163 ] 00:10:09.163 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74219 00:10:09.163 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:09.163 14:26:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:09.428 Running I/O for 10 seconds... 00:10:10.369 Latency(us) 00:10:10.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.369 Nvme0n1 : 1.00 7909.00 30.89 0.00 0.00 0.00 0.00 0.00 00:10:10.369 =================================================================================================================== 00:10:10.369 Total : 7909.00 30.89 0.00 0.00 0.00 0.00 0.00 00:10:10.369 00:10:11.304 14:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:11.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.304 Nvme0n1 : 2.00 7846.00 30.65 0.00 0.00 0.00 0.00 0.00 00:10:11.304 =================================================================================================================== 00:10:11.304 Total : 7846.00 30.65 0.00 0.00 0.00 0.00 0.00 00:10:11.304 00:10:11.561 true 00:10:11.561 14:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:11.561 14:26:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:11.819 14:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:11.819 14:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:11.819 14:26:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74219 00:10:12.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.385 Nvme0n1 : 3.00 7862.33 30.71 0.00 0.00 0.00 0.00 0.00 00:10:12.385 =================================================================================================================== 00:10:12.385 Total : 7862.33 30.71 0.00 0.00 0.00 0.00 0.00 00:10:12.385 00:10:13.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.319 Nvme0n1 : 4.00 7839.00 30.62 0.00 0.00 0.00 0.00 0.00 00:10:13.319 =================================================================================================================== 00:10:13.319 Total : 7839.00 30.62 0.00 0.00 0.00 0.00 0.00 00:10:13.319 00:10:14.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.251 Nvme0n1 : 5.00 7713.00 30.13 0.00 0.00 0.00 0.00 0.00 00:10:14.251 =================================================================================================================== 00:10:14.251 Total : 7713.00 30.13 0.00 0.00 0.00 0.00 0.00 00:10:14.251 00:10:15.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.186 
Nvme0n1 : 6.00 7711.17 30.12 0.00 0.00 0.00 0.00 0.00 00:10:15.186 =================================================================================================================== 00:10:15.186 Total : 7711.17 30.12 0.00 0.00 0.00 0.00 0.00 00:10:15.186 00:10:16.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.562 Nvme0n1 : 7.00 7593.86 29.66 0.00 0.00 0.00 0.00 0.00 00:10:16.562 =================================================================================================================== 00:10:16.562 Total : 7593.86 29.66 0.00 0.00 0.00 0.00 0.00 00:10:16.562 00:10:17.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.497 Nvme0n1 : 8.00 7602.62 29.70 0.00 0.00 0.00 0.00 0.00 00:10:17.497 =================================================================================================================== 00:10:17.497 Total : 7602.62 29.70 0.00 0.00 0.00 0.00 0.00 00:10:17.497 00:10:18.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.432 Nvme0n1 : 9.00 7581.44 29.62 0.00 0.00 0.00 0.00 0.00 00:10:18.432 =================================================================================================================== 00:10:18.432 Total : 7581.44 29.62 0.00 0.00 0.00 0.00 0.00 00:10:18.432 00:10:19.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.365 Nvme0n1 : 10.00 7528.00 29.41 0.00 0.00 0.00 0.00 0.00 00:10:19.365 =================================================================================================================== 00:10:19.365 Total : 7528.00 29.41 0.00 0.00 0.00 0.00 0.00 00:10:19.365 00:10:19.365 00:10:19.365 Latency(us) 00:10:19.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.365 Nvme0n1 : 10.01 7536.38 29.44 0.00 0.00 16978.21 2278.87 143940.89 00:10:19.365 =================================================================================================================== 00:10:19.365 Total : 7536.38 29.44 0.00 0.00 16978.21 2278.87 143940.89 00:10:19.365 0 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74177 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74177 ']' 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74177 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74177 00:10:19.365 killing process with pid 74177 00:10:19.365 Received shutdown signal, test time was about 10.000000 seconds 00:10:19.365 00:10:19.365 Latency(us) 00:10:19.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:19.365 =================================================================================================================== 00:10:19.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 
= sudo ']' 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74177' 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74177 00:10:19.365 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74177 00:10:19.623 14:26:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:19.880 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:20.138 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:20.138 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73574 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73574 00:10:20.395 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73574 Killed "${NVMF_APP[@]}" "$@" 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74393 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74393 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74393 ']' 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:20.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:20.395 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:20.396 14:26:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:20.691 [2024-07-15 14:26:59.992665] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:20.691 [2024-07-15 14:26:59.992789] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.691 [2024-07-15 14:27:00.130172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.691 [2024-07-15 14:27:00.193218] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.691 [2024-07-15 14:27:00.193300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.692 [2024-07-15 14:27:00.193313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.692 [2024-07-15 14:27:00.193321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.692 [2024-07-15 14:27:00.193329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.692 [2024-07-15 14:27:00.193366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.622 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:21.878 [2024-07-15 14:27:01.336850] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:21.878 [2024-07-15 14:27:01.337087] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:21.878 [2024-07-15 14:27:01.337322] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:21.878 14:27:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:21.878 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:22.134 14:27:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe4924de-d2ac-4c2a-aedf-ac9052a110a7 -t 2000 00:10:22.701 [ 00:10:22.701 { 00:10:22.701 "aliases": [ 00:10:22.701 "lvs/lvol" 00:10:22.701 ], 00:10:22.701 "assigned_rate_limits": { 00:10:22.701 "r_mbytes_per_sec": 0, 00:10:22.701 "rw_ios_per_sec": 0, 00:10:22.701 "rw_mbytes_per_sec": 0, 00:10:22.701 "w_mbytes_per_sec": 0 00:10:22.701 }, 00:10:22.701 "block_size": 4096, 00:10:22.701 "claimed": false, 00:10:22.701 "driver_specific": { 00:10:22.701 "lvol": { 00:10:22.701 "base_bdev": "aio_bdev", 00:10:22.701 "clone": false, 00:10:22.701 "esnap_clone": false, 00:10:22.701 "lvol_store_uuid": "f9356322-627f-4a63-9f15-1568d02903ea", 00:10:22.701 "num_allocated_clusters": 38, 00:10:22.701 "snapshot": false, 00:10:22.701 "thin_provision": false 00:10:22.701 } 00:10:22.701 }, 00:10:22.701 "name": "fe4924de-d2ac-4c2a-aedf-ac9052a110a7", 00:10:22.701 "num_blocks": 38912, 00:10:22.701 "product_name": "Logical Volume", 00:10:22.701 "supported_io_types": { 00:10:22.701 "abort": false, 00:10:22.701 "compare": false, 00:10:22.701 "compare_and_write": false, 00:10:22.701 "copy": false, 00:10:22.701 "flush": false, 00:10:22.701 "get_zone_info": false, 00:10:22.701 "nvme_admin": false, 00:10:22.701 "nvme_io": false, 00:10:22.701 "nvme_io_md": false, 00:10:22.701 "nvme_iov_md": false, 00:10:22.701 "read": true, 00:10:22.701 "reset": true, 00:10:22.701 "seek_data": true, 00:10:22.701 "seek_hole": true, 00:10:22.701 "unmap": true, 00:10:22.701 "write": true, 00:10:22.701 "write_zeroes": true, 00:10:22.701 "zcopy": false, 00:10:22.701 "zone_append": false, 00:10:22.701 "zone_management": false 00:10:22.701 }, 00:10:22.701 "uuid": "fe4924de-d2ac-4c2a-aedf-ac9052a110a7", 00:10:22.701 "zoned": false 00:10:22.701 } 00:10:22.701 ] 00:10:22.701 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:22.701 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:22.701 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:22.960 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:22.960 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:22.960 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:23.218 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:23.219 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:23.477 [2024-07-15 14:27:02.890515] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:23.477 14:27:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:23.735 2024/07/15 14:27:03 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f9356322-627f-4a63-9f15-1568d02903ea], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:10:23.735 request: 00:10:23.735 { 00:10:23.735 "method": "bdev_lvol_get_lvstores", 00:10:23.735 "params": { 00:10:23.735 "uuid": "f9356322-627f-4a63-9f15-1568d02903ea" 00:10:23.735 } 00:10:23.735 } 00:10:23.735 Got JSON-RPC error response 00:10:23.735 GoRPCClient: error on JSON-RPC call 00:10:23.735 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:10:23.735 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:23.735 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:23.735 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:23.735 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:23.995 aio_bdev 00:10:23.995 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:23.995 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:23.995 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:23.995 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:10:23.995 14:27:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:23.995 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:23.995 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:24.253 14:27:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe4924de-d2ac-4c2a-aedf-ac9052a110a7 -t 2000 00:10:24.512 [ 00:10:24.512 { 00:10:24.512 "aliases": [ 00:10:24.512 "lvs/lvol" 00:10:24.512 ], 00:10:24.512 "assigned_rate_limits": { 00:10:24.512 "r_mbytes_per_sec": 0, 00:10:24.512 "rw_ios_per_sec": 0, 00:10:24.512 "rw_mbytes_per_sec": 0, 00:10:24.512 "w_mbytes_per_sec": 0 00:10:24.512 }, 00:10:24.512 "block_size": 4096, 00:10:24.512 "claimed": false, 00:10:24.512 "driver_specific": { 00:10:24.512 "lvol": { 00:10:24.512 "base_bdev": "aio_bdev", 00:10:24.512 "clone": false, 00:10:24.512 "esnap_clone": false, 00:10:24.512 "lvol_store_uuid": "f9356322-627f-4a63-9f15-1568d02903ea", 00:10:24.512 "num_allocated_clusters": 38, 00:10:24.512 "snapshot": false, 00:10:24.512 "thin_provision": false 00:10:24.512 } 00:10:24.512 }, 00:10:24.512 "name": "fe4924de-d2ac-4c2a-aedf-ac9052a110a7", 00:10:24.512 "num_blocks": 38912, 00:10:24.512 "product_name": "Logical Volume", 00:10:24.512 "supported_io_types": { 00:10:24.512 "abort": false, 00:10:24.512 "compare": false, 00:10:24.512 "compare_and_write": false, 00:10:24.512 "copy": false, 00:10:24.512 "flush": false, 00:10:24.512 "get_zone_info": false, 00:10:24.512 "nvme_admin": false, 00:10:24.512 "nvme_io": false, 00:10:24.512 "nvme_io_md": false, 00:10:24.512 "nvme_iov_md": false, 00:10:24.512 "read": true, 00:10:24.512 "reset": true, 00:10:24.512 "seek_data": true, 00:10:24.512 "seek_hole": true, 00:10:24.512 "unmap": true, 00:10:24.512 "write": true, 00:10:24.512 "write_zeroes": true, 00:10:24.512 "zcopy": false, 00:10:24.512 "zone_append": false, 00:10:24.512 "zone_management": false 00:10:24.512 }, 00:10:24.512 "uuid": "fe4924de-d2ac-4c2a-aedf-ac9052a110a7", 00:10:24.512 "zoned": false 00:10:24.512 } 00:10:24.512 ] 00:10:24.512 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:10:24.512 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:24.512 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:24.770 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:24.770 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:24.770 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:25.028 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:25.028 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fe4924de-d2ac-4c2a-aedf-ac9052a110a7 00:10:25.286 14:27:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9356322-627f-4a63-9f15-1568d02903ea 00:10:25.545 14:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:26.111 14:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:26.370 ************************************ 00:10:26.370 END TEST lvs_grow_dirty 00:10:26.370 ************************************ 00:10:26.370 00:10:26.370 real 0m21.586s 00:10:26.370 user 0m43.749s 00:10:26.370 sys 0m7.956s 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:26.370 nvmf_trace.0 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:26.370 14:27:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:26.629 rmmod nvme_tcp 00:10:26.629 rmmod nvme_fabrics 00:10:26.629 rmmod nvme_keyring 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74393 ']' 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74393 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74393 ']' 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74393 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:10:26.629 14:27:06 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74393 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:26.629 killing process with pid 74393 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74393' 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74393 00:10:26.629 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74393 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:26.905 00:10:26.905 real 0m41.892s 00:10:26.905 user 1m8.617s 00:10:26.905 sys 0m10.750s 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.905 14:27:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.905 ************************************ 00:10:26.905 END TEST nvmf_lvs_grow 00:10:26.905 ************************************ 00:10:26.905 14:27:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:26.905 14:27:06 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:26.905 14:27:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:26.905 14:27:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.905 14:27:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.905 ************************************ 00:10:26.905 START TEST nvmf_bdev_io_wait 00:10:26.905 ************************************ 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:26.905 * Looking for test storage... 
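The recovery half that follows the forced kill, again condensed from the trace. Sketch only: the target is assumed to have been restarted first, NOT is the autotest_common.sh wrapper that expects the wrapped command to fail, $lvs / $lvol are placeholders, and the counter values are the ones asserted in the log.

  # re-creating the AIO bdev loads the never-cleanly-closed lvstore and
  # triggers blobstore recovery ("Performing recovery on blobstore")
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_wait_for_examine
  rpc.py bdev_get_bdevs -b "$lvol" -t 2000                                   # lvol reappears intact

  # cluster accounting must reflect the grown store: 99 total, 61 free (38 allocated)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99

  # hot-remove the base bdev: the lvstore lookup must now fail with -19 (No such device)
  rpc.py bdev_aio_delete aio_bdev
  NOT rpc.py bdev_lvol_get_lvstores -u "$lvs"

  # a second load/recovery pass must yield the same counters before everything is deleted
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  rpc.py bdev_aio_delete aio_bdev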
00:10:26.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.905 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:27.164 Cannot find device "nvmf_tgt_br" 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:27.164 Cannot find device "nvmf_tgt_br2" 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:27.164 Cannot find device "nvmf_tgt_br" 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:27.164 Cannot find device "nvmf_tgt_br2" 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:27.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:27.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:27.164 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:27.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:10:27.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:27.423 00:10:27.423 --- 10.0.0.2 ping statistics --- 00:10:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.423 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:27.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:27.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:10:27.423 00:10:27.423 --- 10.0.0.3 ping statistics --- 00:10:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.423 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:27.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:27.423 00:10:27.423 --- 10.0.0.1 ping statistics --- 00:10:27.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.423 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.423 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74805 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74805 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74805 ']' 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
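Condensed from the nvmf_veth_init trace above: three veth pairs are created, the target ends are moved into the nvmf_tgt_ns_spdk namespace and addressed as 10.0.0.2/24 and 10.0.0.3/24, the host-side peers are enslaved to the nvmf_br bridge, and iptables opens TCP/4420 toward the initiator interface before the three sanity pings. The same steps as a standalone sketch (names and addresses copied from the log; this summarizes the topology rather than reproducing the common.sh functions):

    set -e
    ip netns add nvmf_tgt_ns_spdk                          # target runs in its own namespace

    # three veth pairs: one initiator-side, two target-side interfaces
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3               # connectivity checks, as in the log
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1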
00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.424 14:27:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:27.424 [2024-07-15 14:27:06.944393] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:27.424 [2024-07-15 14:27:06.944501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.682 [2024-07-15 14:27:07.084052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.683 [2024-07-15 14:27:07.144867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.683 [2024-07-15 14:27:07.144929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.683 [2024-07-15 14:27:07.144941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.683 [2024-07-15 14:27:07.144949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.683 [2024-07-15 14:27:07.144957] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.683 [2024-07-15 14:27:07.145224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.683 [2024-07-15 14:27:07.145485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.683 [2024-07-15 14:27:07.146780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.683 [2024-07-15 14:27:07.146794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.677 14:27:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.677 14:27:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:10:28.678 14:27:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:28.678 14:27:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:28.678 14:27:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 14:27:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 [2024-07-15 14:27:08.060019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 Malloc0 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.678 [2024-07-15 14:27:08.111464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74864 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74866 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.678 { 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme$subsystem", 00:10:28.678 "trtype": "$TEST_TRANSPORT", 00:10:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "$NVMF_PORT", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.678 "hdgst": ${hdgst:-false}, 00:10:28.678 "ddgst": 
${ddgst:-false} 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 } 00:10:28.678 EOF 00:10:28.678 )") 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74868 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74870 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.678 { 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme$subsystem", 00:10:28.678 "trtype": "$TEST_TRANSPORT", 00:10:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "$NVMF_PORT", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.678 "hdgst": ${hdgst:-false}, 00:10:28.678 "ddgst": ${ddgst:-false} 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 } 00:10:28.678 EOF 00:10:28.678 )") 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.678 { 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme$subsystem", 00:10:28.678 "trtype": "$TEST_TRANSPORT", 00:10:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "$NVMF_PORT", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.678 "hdgst": ${hdgst:-false}, 00:10:28.678 "ddgst": ${ddgst:-false} 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 } 00:10:28.678 EOF 00:10:28.678 )") 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:28.678 { 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme$subsystem", 00:10:28.678 "trtype": "$TEST_TRANSPORT", 00:10:28.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "$NVMF_PORT", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.678 "hdgst": ${hdgst:-false}, 00:10:28.678 "ddgst": ${ddgst:-false} 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 } 00:10:28.678 EOF 00:10:28.678 )") 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme1", 00:10:28.678 "trtype": "tcp", 00:10:28.678 "traddr": "10.0.0.2", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "4420", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.678 "hdgst": false, 00:10:28.678 "ddgst": false 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 }' 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme1", 00:10:28.678 "trtype": "tcp", 00:10:28.678 "traddr": "10.0.0.2", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "4420", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.678 "hdgst": false, 00:10:28.678 "ddgst": false 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 }' 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
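Each of the four bdevperf instances launched above gets its bdev configuration the same way: gen_nvmf_target_json expands the heredoc into one bdev_nvme_attach_controller entry per subsystem, jq assembles the result, and the JSON reaches bdevperf through process substitution, which is why every command line shows --json /dev/fd/63. A sketch of that plumbing for the single subsystem used here; the inner entry is copied from the printf output in the trace, while the surrounding "subsystems"/"config" envelope is an assumption about the shape bdevperf expects:

    # sketch only: hand-rolled equivalent of gen_nvmf_target_json for one subsystem
    gen_target_json() {
    cat <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON
    }

    # <(...) appears to the child process as /dev/fd/63, matching the command lines above;
    # flags mirror the "write" instance (core mask 0x10, queue depth 128, 4 KiB IO, 1 second)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_target_json) -q 128 -o 4096 -w write -t 1 -s 256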
00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.678 "params": { 00:10:28.678 "name": "Nvme1", 00:10:28.678 "trtype": "tcp", 00:10:28.678 "traddr": "10.0.0.2", 00:10:28.678 "adrfam": "ipv4", 00:10:28.678 "trsvcid": "4420", 00:10:28.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.678 "hdgst": false, 00:10:28.678 "ddgst": false 00:10:28.678 }, 00:10:28.678 "method": "bdev_nvme_attach_controller" 00:10:28.678 }' 00:10:28.678 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:28.679 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:28.679 "params": { 00:10:28.679 "name": "Nvme1", 00:10:28.679 "trtype": "tcp", 00:10:28.679 "traddr": "10.0.0.2", 00:10:28.679 "adrfam": "ipv4", 00:10:28.679 "trsvcid": "4420", 00:10:28.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.679 "hdgst": false, 00:10:28.679 "ddgst": false 00:10:28.679 }, 00:10:28.679 "method": "bdev_nvme_attach_controller" 00:10:28.679 }' 00:10:28.679 [2024-07-15 14:27:08.178794] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:28.679 [2024-07-15 14:27:08.178887] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:28.679 14:27:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74864 00:10:28.679 [2024-07-15 14:27:08.202952] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:28.679 [2024-07-15 14:27:08.203040] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:28.679 [2024-07-15 14:27:08.222843] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:28.679 [2024-07-15 14:27:08.222950] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:28.679 [2024-07-15 14:27:08.230764] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:10:28.679 [2024-07-15 14:27:08.230910] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:28.938 [2024-07-15 14:27:08.357498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.938 [2024-07-15 14:27:08.402449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.938 [2024-07-15 14:27:08.412128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:28.938 [2024-07-15 14:27:08.454561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.938 [2024-07-15 14:27:08.456653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:28.938 [2024-07-15 14:27:08.497903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.938 [2024-07-15 14:27:08.508255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:29.196 Running I/O for 1 seconds... 00:10:29.196 [2024-07-15 14:27:08.551524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:29.196 Running I/O for 1 seconds... 00:10:29.196 Running I/O for 1 seconds... 00:10:29.196 Running I/O for 1 seconds... 00:10:30.131 00:10:30.131 Latency(us) 00:10:30.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.131 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:30.131 Nvme1n1 : 1.02 6502.37 25.40 0.00 0.00 19394.24 7685.59 34555.35 00:10:30.131 =================================================================================================================== 00:10:30.131 Total : 6502.37 25.40 0.00 0.00 19394.24 7685.59 34555.35 00:10:30.131 00:10:30.131 Latency(us) 00:10:30.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.131 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:30.131 Nvme1n1 : 1.00 158651.53 619.73 0.00 0.00 803.53 413.32 1318.17 00:10:30.131 =================================================================================================================== 00:10:30.131 Total : 158651.53 619.73 0.00 0.00 803.53 413.32 1318.17 00:10:30.131 00:10:30.131 Latency(us) 00:10:30.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.131 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:30.131 Nvme1n1 : 1.01 8103.85 31.66 0.00 0.00 15700.48 9115.46 24903.68 00:10:30.131 =================================================================================================================== 00:10:30.131 Total : 8103.85 31.66 0.00 0.00 15700.48 9115.46 24903.68 00:10:30.131 00:10:30.131 Latency(us) 00:10:30.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.131 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:30.131 Nvme1n1 : 1.01 6419.29 25.08 0.00 0.00 19866.13 6881.28 43849.54 00:10:30.131 =================================================================================================================== 00:10:30.131 Total : 6419.29 25.08 0.00 0.00 19866.13 6881.28 43849.54 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74866 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74868 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74870 00:10:30.389 
14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.389 14:27:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.647 rmmod nvme_tcp 00:10:30.647 rmmod nvme_fabrics 00:10:30.647 rmmod nvme_keyring 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74805 ']' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74805 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74805 ']' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74805 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74805 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:30.647 killing process with pid 74805 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74805' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74805 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74805 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:30.647 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.907 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:30.907 00:10:30.907 real 0m3.836s 00:10:30.907 user 0m17.042s 00:10:30.907 sys 0m1.796s 00:10:30.907 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.907 14:27:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.907 ************************************ 00:10:30.907 END TEST nvmf_bdev_io_wait 00:10:30.907 ************************************ 00:10:30.907 14:27:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:30.907 14:27:10 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:30.907 14:27:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.907 14:27:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.907 14:27:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.907 ************************************ 00:10:30.907 START TEST nvmf_queue_depth 00:10:30.907 ************************************ 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:30.907 * Looking for test storage... 00:10:30.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:30.907 14:27:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:30.908 Cannot find device 
"nvmf_tgt_br" 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.908 Cannot find device "nvmf_tgt_br2" 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:30.908 Cannot find device "nvmf_tgt_br" 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:30.908 Cannot find device "nvmf_tgt_br2" 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:30.908 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.167 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:31.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:10:31.167 00:10:31.167 --- 10.0.0.2 ping statistics --- 00:10:31.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.167 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:31.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:10:31.167 00:10:31.167 --- 10.0.0.3 ping statistics --- 00:10:31.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.167 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:10:31.167 00:10:31.167 --- 10.0.0.1 ping statistics --- 00:10:31.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.167 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.167 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=75094 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 75094 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75094 ']' 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.426 14:27:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.426 [2024-07-15 14:27:10.832225] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:31.426 [2024-07-15 14:27:10.832331] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.426 [2024-07-15 14:27:10.965665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.684 [2024-07-15 14:27:11.026495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.684 [2024-07-15 14:27:11.026563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:31.685 [2024-07-15 14:27:11.026575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.685 [2024-07-15 14:27:11.026584] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.685 [2024-07-15 14:27:11.026592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.685 [2024-07-15 14:27:11.026629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 [2024-07-15 14:27:11.158786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 Malloc0 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.685 [2024-07-15 14:27:11.223979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.685 14:27:11 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75136 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75136 /var/tmp/bdevperf.sock 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75136 ']' 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:31.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:31.685 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.943 [2024-07-15 14:27:11.286006] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:31.943 [2024-07-15 14:27:11.286113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75136 ] 00:10:31.943 [2024-07-15 14:27:11.426077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.943 [2024-07-15 14:27:11.485895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:32.201 NVMe0n1 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.201 14:27:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:32.201 Running I/O for 10 seconds... 
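The queue-depth run above is driven entirely over two RPC sockets: the target's default /var/tmp/spdk.sock for building the subsystem, and the extra /var/tmp/bdevperf.sock that bdevperf opens because of -z -r. Replayed with scripts/rpc.py, which is what the rpc_cmd helper wraps in these scripts, the sequence looks roughly as follows; every argument is copied from the trace, only the explicit rpc.py invocations and the backgrounding are an editorial sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (default socket /var/tmp/spdk.sock): transport, backing bdev, subsystem
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf idles (-z) on its own RPC socket until it is given a bdev;
    # the real script waits for the socket before issuing the attach
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # kick off the 10-second verify run at queue depth 1024
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests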
00:10:44.406 00:10:44.406 Latency(us) 00:10:44.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.406 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:44.406 Verification LBA range: start 0x0 length 0x4000 00:10:44.406 NVMe0n1 : 10.08 8306.20 32.45 0.00 0.00 122762.59 27763.43 120586.24 00:10:44.406 =================================================================================================================== 00:10:44.406 Total : 8306.20 32.45 0.00 0.00 122762.59 27763.43 120586.24 00:10:44.406 0 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75136 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75136 ']' 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75136 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75136 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75136' 00:10:44.406 killing process with pid 75136 00:10:44.406 Received shutdown signal, test time was about 10.000000 seconds 00:10:44.406 00:10:44.406 Latency(us) 00:10:44.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.406 =================================================================================================================== 00:10:44.406 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75136 00:10:44.406 14:27:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75136 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.406 rmmod nvme_tcp 00:10:44.406 rmmod nvme_fabrics 00:10:44.406 rmmod nvme_keyring 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 75094 ']' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 75094 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75094 ']' 00:10:44.406 
14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75094 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75094 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:44.406 killing process with pid 75094 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75094' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75094 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75094 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:44.406 00:10:44.406 real 0m12.084s 00:10:44.406 user 0m20.971s 00:10:44.406 sys 0m1.896s 00:10:44.406 ************************************ 00:10:44.406 END TEST nvmf_queue_depth 00:10:44.406 ************************************ 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.406 14:27:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 14:27:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:44.406 14:27:22 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:44.406 14:27:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.406 14:27:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.406 14:27:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 ************************************ 00:10:44.406 START TEST nvmf_target_multipath 00:10:44.406 ************************************ 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:44.406 * Looking for test storage... 
00:10:44.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.406 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.407 14:27:22 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:44.407 Cannot find device "nvmf_tgt_br" 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.407 Cannot find device "nvmf_tgt_br2" 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:44.407 Cannot find device "nvmf_tgt_br" 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:44.407 
14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:44.407 Cannot find device "nvmf_tgt_br2" 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.407 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:44.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:44.407 00:10:44.407 --- 10.0.0.2 ping statistics --- 00:10:44.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.407 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:44.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:44.407 00:10:44.407 --- 10.0.0.3 ping statistics --- 00:10:44.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.407 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:44.407 00:10:44.407 --- 10.0.0.1 ping statistics --- 00:10:44.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.407 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75443 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
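For orientation, the nvmf_veth_init trace above boils down to this topology: one veth pair left in the root namespace for the initiator (10.0.0.1) and two pairs whose far ends move into nvmf_tgt_ns_spdk as the target portals (10.0.0.2 and 10.0.0.3), all tied together by the nvmf_br bridge. A condensed sketch, using only the interface names and addresses visible in the trace (not a verbatim excerpt of nvmf/common.sh):

    # Condensed sketch of the veth topology built above; names/addresses taken from the trace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target portal
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target portal
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator -> both portals
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator

With connectivity confirmed by the three pings, the test loads nvme-tcp on the initiator side and starts nvmf_tgt inside the namespace (the `ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF` line above), so every listener added later is reachable only through this bridge.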
00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75443 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75443 ']' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.407 14:27:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:44.407 [2024-07-15 14:27:22.988303] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:10:44.407 [2024-07-15 14:27:22.988985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.407 [2024-07-15 14:27:23.131069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.407 [2024-07-15 14:27:23.201870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.408 [2024-07-15 14:27:23.201925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.408 [2024-07-15 14:27:23.201939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.408 [2024-07-15 14:27:23.201949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.408 [2024-07-15 14:27:23.201959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
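Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, the rest of the multipath test (traced below) reduces to: expose one malloc-backed subsystem through both portals, connect to each portal from the initiator, then flip ANA states per listener while fio is running. A condensed sketch assembled from the rpc.py and nvme-cli calls visible in the trace; $rpc, $NVME_HOSTNQN and $NVME_HOSTID are shorthand here for the script path and the generated host identity, not new parameters:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py              # RPCs go over /var/tmp/spdk.sock from the root namespace
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # portal 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # portal 2

    # Initiator: one connect per portal with the same host identity, so the kernel
    # groups both TCP connections under a single ANA-aware subsystem (nvme-subsys0 below).
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

    # While fio drives /dev/nvme0n1, the ANA state of each listener is changed and the
    # test polls /sys/block/nvme0c0n1/ana_state and /sys/block/nvme0c1n1/ana_state.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

Because both connects target the same subsystem NQN with the same hostnqn/hostid, the two connections show up as paths nvme0c0n1 and nvme0c1n1 of one namespace, which is exactly what the get_subsystem and check_ana_state steps in the trace below verify.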
00:10:44.408 [2024-07-15 14:27:23.202029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.408 [2024-07-15 14:27:23.202099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.408 [2024-07-15 14:27:23.202483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.408 [2024-07-15 14:27:23.202521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.408 14:27:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:44.408 14:27:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:44.408 14:27:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.408 14:27:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:44.408 14:27:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:44.665 14:27:24 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.665 14:27:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:44.665 [2024-07-15 14:27:24.240143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.922 14:27:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:44.922 Malloc0 00:10:45.180 14:27:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:45.180 14:27:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.779 14:27:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.779 [2024-07-15 14:27:25.310322] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.779 14:27:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:46.060 [2024-07-15 14:27:25.554555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:46.060 14:27:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:46.319 14:27:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:46.577 14:27:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.577 14:27:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:46.577 14:27:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:46.577 14:27:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:46.577 14:27:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:48.480 14:27:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:48.480 14:27:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.480 14:27:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75586 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:48.480 14:27:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:48.480 [global] 00:10:48.480 thread=1 00:10:48.480 invalidate=1 00:10:48.480 rw=randrw 00:10:48.480 time_based=1 00:10:48.480 runtime=6 00:10:48.480 ioengine=libaio 00:10:48.480 direct=1 00:10:48.480 bs=4096 00:10:48.480 iodepth=128 00:10:48.480 norandommap=0 00:10:48.480 numjobs=1 00:10:48.480 00:10:48.480 verify_dump=1 00:10:48.480 verify_backlog=512 00:10:48.480 verify_state_save=0 00:10:48.480 do_verify=1 00:10:48.480 verify=crc32c-intel 00:10:48.480 [job0] 00:10:48.480 filename=/dev/nvme0n1 00:10:48.480 Could not set queue depth (nvme0n1) 00:10:48.739 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.739 fio-3.35 00:10:48.739 Starting 1 thread 00:10:49.672 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:49.930 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:50.187 14:27:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:51.122 14:27:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:51.122 14:27:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:51.122 14:27:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:51.122 14:27:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:51.380 14:27:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:51.645 14:27:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:52.581 14:27:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:52.581 14:27:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:52.581 14:27:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:52.581 14:27:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75586 00:10:55.112 00:10:55.112 job0: (groupid=0, jobs=1): err= 0: pid=75607: Mon Jul 15 14:27:34 2024 00:10:55.112 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(248MiB/6006msec) 00:10:55.112 slat (usec): min=2, max=8196, avg=53.57, stdev=236.93 00:10:55.112 clat (usec): min=828, max=23775, avg=8164.65, stdev=1479.56 00:10:55.112 lat (usec): min=869, max=25350, avg=8218.23, stdev=1490.34 00:10:55.112 clat percentiles (usec): 00:10:55.112 | 1.00th=[ 4883], 5.00th=[ 6194], 10.00th=[ 6915], 20.00th=[ 7308], 00:10:55.112 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8225], 00:10:55.112 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[10683], 00:10:55.112 | 99.00th=[13566], 99.50th=[14877], 99.90th=[18220], 99.95th=[20841], 00:10:55.112 | 99.99th=[23462] 00:10:55.112 bw ( KiB/s): min= 5440, max=30632, per=54.04%, avg=22851.00, stdev=6949.44, samples=11 00:10:55.112 iops : min= 1360, max= 7658, avg=5712.73, stdev=1737.36, samples=11 00:10:55.112 write: IOPS=6429, BW=25.1MiB/s (26.3MB/s)(135MiB/5374msec); 0 zone resets 00:10:55.112 slat (usec): min=3, max=5629, avg=64.64, stdev=161.23 00:10:55.112 clat (usec): min=766, max=23264, avg=7038.16, stdev=1278.73 00:10:55.112 lat (usec): min=799, max=23297, avg=7102.79, stdev=1285.63 00:10:55.112 clat percentiles (usec): 00:10:55.112 | 1.00th=[ 3720], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 6325], 00:10:55.112 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7177], 00:10:55.112 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8160], 95.00th=[ 8848], 00:10:55.112 | 99.00th=[11731], 99.50th=[12649], 99.90th=[13960], 99.95th=[14877], 00:10:55.112 | 99.99th=[23200] 00:10:55.112 bw ( KiB/s): min= 5872, max=30168, per=89.00%, avg=22888.55, stdev=6643.69, samples=11 00:10:55.112 iops : min= 1468, max= 7542, avg=5722.09, stdev=1660.91, samples=11 00:10:55.112 lat (usec) : 1000=0.01% 00:10:55.112 lat (msec) : 2=0.07%, 4=0.66%, 10=93.30%, 20=5.91%, 50=0.05% 00:10:55.112 cpu : usr=5.91%, sys=24.08%, ctx=6514, majf=0, minf=121 00:10:55.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:55.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.112 issued rwts: total=63492,34550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.112 00:10:55.112 Run status group 0 (all jobs): 00:10:55.112 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=248MiB (260MB), run=6006-6006msec 00:10:55.112 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=135MiB (142MB), run=5374-5374msec 00:10:55.112 00:10:55.112 Disk stats (read/write): 00:10:55.112 nvme0n1: ios=62559/33916, merge=0/0, 
ticks=477501/221632, in_queue=699133, util=98.60% 00:10:55.112 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:55.112 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:55.371 14:27:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75739 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:56.304 14:27:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:56.304 [global] 00:10:56.304 thread=1 00:10:56.304 invalidate=1 00:10:56.304 rw=randrw 00:10:56.304 time_based=1 00:10:56.304 runtime=6 00:10:56.304 ioengine=libaio 00:10:56.304 direct=1 00:10:56.304 bs=4096 00:10:56.304 iodepth=128 00:10:56.304 norandommap=0 00:10:56.304 numjobs=1 00:10:56.304 00:10:56.304 verify_dump=1 00:10:56.304 verify_backlog=512 00:10:56.304 verify_state_save=0 00:10:56.304 do_verify=1 00:10:56.304 verify=crc32c-intel 00:10:56.304 [job0] 00:10:56.304 filename=/dev/nvme0n1 00:10:56.304 Could not set queue depth (nvme0n1) 00:10:56.563 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.563 fio-3.35 00:10:56.563 Starting 1 thread 00:10:57.499 14:27:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:57.758 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:58.022 14:27:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:58.955 14:27:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:58.955 14:27:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:58.955 14:27:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:58.955 14:27:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:59.518 14:27:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:59.776 14:27:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:11:00.705 14:27:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:11:00.705 14:27:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:00.705 14:27:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:00.705 14:27:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75739 00:11:02.597 00:11:02.597 job0: (groupid=0, jobs=1): err= 0: pid=75760: Mon Jul 15 14:27:42 2024 00:11:02.597 read: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(276MiB/6002msec) 00:11:02.597 slat (usec): min=4, max=5340, avg=42.98, stdev=204.37 00:11:02.597 clat (usec): min=287, max=17073, avg=7407.74, stdev=1894.91 00:11:02.597 lat (usec): min=304, max=17090, avg=7450.71, stdev=1910.29 00:11:02.597 clat percentiles (usec): 00:11:02.597 | 1.00th=[ 2024], 5.00th=[ 4015], 10.00th=[ 4817], 20.00th=[ 5932], 00:11:02.597 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:11:02.597 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10421], 00:11:02.597 | 99.00th=[11994], 99.50th=[12518], 99.90th=[14091], 99.95th=[14615], 00:11:02.597 | 99.99th=[15664] 00:11:02.597 bw ( KiB/s): min= 8032, max=40496, per=54.51%, avg=25709.73, stdev=9463.56, samples=11 00:11:02.597 iops : min= 2008, max=10124, avg=6427.36, stdev=2365.90, samples=11 00:11:02.597 write: IOPS=7212, BW=28.2MiB/s (29.5MB/s)(150MiB/5324msec); 0 zone resets 00:11:02.597 slat (usec): min=7, max=2158, avg=55.30, stdev=132.54 00:11:02.597 clat (usec): min=312, max=13791, avg=6184.11, stdev=1792.70 00:11:02.597 lat (usec): min=355, max=13837, avg=6239.41, stdev=1805.24 00:11:02.597 clat percentiles (usec): 00:11:02.597 | 1.00th=[ 1647], 5.00th=[ 3097], 10.00th=[ 3654], 20.00th=[ 4490], 00:11:02.597 | 30.00th=[ 5342], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6915], 00:11:02.597 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8979], 00:11:02.597 | 99.00th=[10421], 99.50th=[10814], 99.90th=[12125], 99.95th=[12780], 00:11:02.597 | 99.99th=[13304] 00:11:02.597 bw ( KiB/s): min= 8192, max=40960, per=89.09%, avg=25703.91, stdev=9286.22, samples=11 00:11:02.597 iops : min= 2048, max=10240, avg=6425.91, stdev=2321.57, samples=11 00:11:02.597 lat (usec) : 500=0.03%, 750=0.10%, 1000=0.15% 00:11:02.597 lat (msec) : 2=0.84%, 4=7.02%, 10=87.14%, 20=4.72% 00:11:02.597 cpu : usr=5.92%, sys=26.91%, ctx=8016, majf=0, minf=108 00:11:02.597 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:02.597 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.597 issued rwts: total=70773,38400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.597 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.597 00:11:02.597 Run status group 0 (all jobs): 00:11:02.597 READ: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=276MiB (290MB), run=6002-6002msec 00:11:02.597 WRITE: bw=28.2MiB/s (29.5MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=150MiB (157MB), run=5324-5324msec 00:11:02.597 00:11:02.597 Disk stats (read/write): 00:11:02.597 nvme0n1: ios=69832/37807, merge=0/0, ticks=476288/210102, in_queue=686390, util=98.60% 00:11:02.597 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:02.855 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.855 14:27:42 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:02.855 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:02.855 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.856 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:02.856 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.856 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:11:02.856 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.114 rmmod nvme_tcp 00:11:03.114 rmmod nvme_fabrics 00:11:03.114 rmmod nvme_keyring 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75443 ']' 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75443 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75443 ']' 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75443 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75443 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.114 killing process with pid 75443 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75443' 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75443 00:11:03.114 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 75443 00:11:03.372 
14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:03.372 00:11:03.372 real 0m20.389s 00:11:03.372 user 1m20.312s 00:11:03.372 sys 0m6.624s 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.372 14:27:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:03.372 ************************************ 00:11:03.372 END TEST nvmf_target_multipath 00:11:03.372 ************************************ 00:11:03.372 14:27:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:03.372 14:27:42 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:03.372 14:27:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:03.372 14:27:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.372 14:27:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.372 ************************************ 00:11:03.372 START TEST nvmf_zcopy 00:11:03.372 ************************************ 00:11:03.372 14:27:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:03.372 * Looking for test storage... 
00:11:03.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.632 14:27:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:03.632 Cannot find device "nvmf_tgt_br" 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.632 Cannot find device "nvmf_tgt_br2" 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:03.632 Cannot find device "nvmf_tgt_br" 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:03.632 Cannot find device "nvmf_tgt_br2" 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:11:03.632 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:11:03.899 00:11:03.899 --- 10.0.0.2 ping statistics --- 00:11:03.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.899 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.899 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.899 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:11:03.899 00:11:03.899 --- 10.0.0.3 ping statistics --- 00:11:03.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.899 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:11:03.899 00:11:03.899 --- 10.0.0.1 ping statistics --- 00:11:03.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.899 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=76039 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 76039 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 76039 ']' 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.899 14:27:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.899 [2024-07-15 14:27:43.436457] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:03.899 [2024-07-15 14:27:43.436574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.156 [2024-07-15 14:27:43.597353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.156 [2024-07-15 14:27:43.677641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.156 [2024-07-15 14:27:43.677691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:04.156 [2024-07-15 14:27:43.677716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.156 [2024-07-15 14:27:43.677725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.156 [2024-07-15 14:27:43.677732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.156 [2024-07-15 14:27:43.677760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 [2024-07-15 14:27:44.460482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 [2024-07-15 14:27:44.484664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 malloc0 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.091 
14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:05.091 { 00:11:05.091 "params": { 00:11:05.091 "name": "Nvme$subsystem", 00:11:05.091 "trtype": "$TEST_TRANSPORT", 00:11:05.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:05.091 "adrfam": "ipv4", 00:11:05.091 "trsvcid": "$NVMF_PORT", 00:11:05.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:05.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:05.091 "hdgst": ${hdgst:-false}, 00:11:05.091 "ddgst": ${ddgst:-false} 00:11:05.091 }, 00:11:05.091 "method": "bdev_nvme_attach_controller" 00:11:05.091 } 00:11:05.091 EOF 00:11:05.091 )") 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:05.091 14:27:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:05.091 "params": { 00:11:05.091 "name": "Nvme1", 00:11:05.091 "trtype": "tcp", 00:11:05.091 "traddr": "10.0.0.2", 00:11:05.091 "adrfam": "ipv4", 00:11:05.091 "trsvcid": "4420", 00:11:05.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:05.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:05.091 "hdgst": false, 00:11:05.091 "ddgst": false 00:11:05.091 }, 00:11:05.091 "method": "bdev_nvme_attach_controller" 00:11:05.091 }' 00:11:05.091 [2024-07-15 14:27:44.574524] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:05.091 [2024-07-15 14:27:44.574635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76092 ] 00:11:05.348 [2024-07-15 14:27:44.712272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.348 [2024-07-15 14:27:44.776966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.348 Running I/O for 10 seconds... 
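The run above assembles the bdevperf configuration on the fly: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (here a single controller, Nvme1, reached over TCP at 10.0.0.2:4420 with digests disabled), and the pretty-printed result reaches bdevperf as /dev/fd/62 via process substitution rather than a file on disk. A minimal stand-alone sketch of that pattern, assuming the usual SPDK "subsystems"/"bdev"/"config" envelope around the fragment printed in the trace (the envelope itself is not shown above):

    # Sketch only -- not the common.sh helper itself. Build the one-controller
    # bdev config that the trace prints, wrapped in the assumed SPDK envelope.
    gen_bdevperf_config() {
        jq -n '{
          subsystems: [{
            subsystem: "bdev",
            config: [{
              method: "bdev_nvme_attach_controller",
              params: {
                name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
                adrfam: "ipv4", trsvcid: "4420",
                subnqn: "nqn.2016-06.io.spdk:cnode1",
                hostnqn: "nqn.2016-06.io.spdk:host1",
                hdgst: false, ddgst: false
              }
            }]
          }]
        }'
    }

    # Same shape of invocation as the 10-second verify run above; the <(...)
    # process substitution is what makes the config show up as /dev/fd/NN
    # inside bdevperf.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_bdevperf_config) -t 10 -q 128 -w verify -o 8192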
00:11:15.371 00:11:15.371 Latency(us) 00:11:15.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.371 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:15.371 Verification LBA range: start 0x0 length 0x1000 00:11:15.371 Nvme1n1 : 10.02 5972.26 46.66 0.00 0.00 21361.30 3649.16 31695.59 00:11:15.371 =================================================================================================================== 00:11:15.371 Total : 5972.26 46.66 0.00 0.00 21361.30 3649.16 31695.59 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76210 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:15.650 { 00:11:15.650 "params": { 00:11:15.650 "name": "Nvme$subsystem", 00:11:15.650 "trtype": "$TEST_TRANSPORT", 00:11:15.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:15.650 "adrfam": "ipv4", 00:11:15.650 "trsvcid": "$NVMF_PORT", 00:11:15.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:15.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:15.650 "hdgst": ${hdgst:-false}, 00:11:15.650 "ddgst": ${ddgst:-false} 00:11:15.650 }, 00:11:15.650 "method": "bdev_nvme_attach_controller" 00:11:15.650 } 00:11:15.650 EOF 00:11:15.650 )") 00:11:15.650 [2024-07-15 14:27:55.109896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.650 [2024-07-15 14:27:55.109946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
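The verify-run table above is internally consistent: at the 8192-byte I/O size used by bdevperf, 5972.26 IOPS corresponds to the reported 46.66 MiB/s. A quick check using only numbers taken from the table:

    # 5972.26 IOPS at 8192 bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes)
    awk 'BEGIN { printf "%.2f MiB/s\n", 5972.26 * 8192 / 1048576 }'   # -> 46.66 MiB/s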
00:11:15.650 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:11:15.650 14:27:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:15.650 "params": { 00:11:15.650 "name": "Nvme1", 00:11:15.650 "trtype": "tcp", 00:11:15.650 "traddr": "10.0.0.2", 00:11:15.650 "adrfam": "ipv4", 00:11:15.650 "trsvcid": "4420", 00:11:15.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:15.650 "hdgst": false, 00:11:15.650 "ddgst": false 00:11:15.650 }, 00:11:15.650 "method": "bdev_nvme_attach_controller" 00:11:15.650 }' 00:11:15.650 [2024-07-15 14:27:55.117900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.650 [2024-07-15 14:27:55.117938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.650 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.650 [2024-07-15 14:27:55.129910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.650 [2024-07-15 14:27:55.129952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.650 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.650 [2024-07-15 14:27:55.141898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.650 [2024-07-15 14:27:55.141936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.650 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.650 [2024-07-15 14:27:55.153900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.650 [2024-07-15 14:27:55.153940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.650 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.650 [2024-07-15 14:27:55.165924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.650 [2024-07-15 14:27:55.165969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.650 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.171914] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:15.651 [2024-07-15 14:27:55.172043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76210 ] 00:11:15.651 [2024-07-15 14:27:55.177902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.177934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.651 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.189914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.189949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.651 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.201915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.201958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.651 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.213911] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.213952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.651 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.225921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.225961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.651 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.233901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.233935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.651 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.651 [2024-07-15 14:27:55.241918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.651 [2024-07-15 14:27:55.241958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.254085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.254149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.262003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.262039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.274010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.274047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.286013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.286050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.298012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.298047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.310048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.310108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 [2024-07-15 14:27:55.312551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.322037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.322080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.334037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.334075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.346049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.346098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.358043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.358080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.370054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.370096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.374565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.910 [2024-07-15 14:27:55.382041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.382077] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.394072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.394116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.406069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.406112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.418084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.418132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.434061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.434101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.442064] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.442104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.450076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.450114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.458067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.458102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.466104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.466142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.478082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.478122] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.910 [2024-07-15 14:27:55.490087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.910 [2024-07-15 14:27:55.490126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:15.910 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:15.911 [2024-07-15 14:27:55.502933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:15.911 [2024-07-15 14:27:55.502983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.169 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.169 Running I/O for 5 seconds... 
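The paired target-side errors around this point (Requested NSID 1 already in use / Unable to add namespace, each followed by a client-side JSON-RPC failure with Code=-32602) keep repeating for the duration of the 5-second randrw job. That is the signature of a hot-add loop that re-issues nvmf_subsystem_add_ns for a namespace that is still attached while I/O is in flight; a rough sketch of such a loop (not the literal target/zcopy.sh code) looks like:

    # Rough sketch, not lifted from target/zcopy.sh: keep re-adding NSID 1 while
    # the bdevperf job ($perfpid above) is alive. Every call is expected to fail
    # with JSON-RPC -32602, matching the records in this trace.
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done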
00:11:16.169 [2024-07-15 14:27:55.514190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.169 [2024-07-15 14:27:55.514233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.522179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.522218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.535641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.535687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.546500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.546552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.561229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.561285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.571635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.571686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.586277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.586325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.602992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.603040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.618555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.618604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.629202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.629250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.643681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.643744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.661104] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.661161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.676900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.676960] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.694798] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.694850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.710590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.710638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.727822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.727871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.745113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.745169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.170 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.170 [2024-07-15 14:27:55.760731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.170 [2024-07-15 14:27:55.760783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.429 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.429 [2024-07-15 14:27:55.776356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.429 [2024-07-15 14:27:55.776406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.430 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:16.430 [2024-07-15 14:27:55.792113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:16.430 [2024-07-15 14:27:55.792160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:16.430 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:11:16.430 [2024-07-15 14:27:55.802915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:16.430 [2024-07-15 14:27:55.802957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:16.430 2024/07/15 14:27:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the same trio of messages - spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", nvmf_rpc_ns_paused: "Unable to add namespace", and the matching JSON-RPC reply Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns on nqn.2016-06.io.spdk:cnode1 - recurs for every further add attempt from 14:27:55.817 through 14:27:57.710 (log time 00:11:16.430 to 00:11:18.242)]
parameters 00:11:18.242 [2024-07-15 14:27:57.635621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.635661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.651485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.651533] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.661457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.661495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.677014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.677055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.693960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.694000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.710249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.710289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.726441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.726481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.744445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.744484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.759460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.759500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.769273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.769309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.784092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.784129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.799862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.799907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.810424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.810454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.242 [2024-07-15 14:27:57.825653] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.242 [2024-07-15 14:27:57.825730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.242 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.842527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.842594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.858118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.858162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.868325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.868363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.882868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.882917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.899860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.899902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.910201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.910240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.925059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.925103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.942573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.942618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.958097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.958139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.500 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.500 [2024-07-15 14:27:57.968469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.500 [2024-07-15 14:27:57.968509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:57.983507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:57.983560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:57.999906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:57.999974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:58.016018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:58.016080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:58.032841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:58.032877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:58.049640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:58.049681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:58.060313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:58.060353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:58.075588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:58.075631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.501 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.501 [2024-07-15 14:27:58.091184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.501 [2024-07-15 14:27:58.091223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.759 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.759 [2024-07-15 14:27:58.108168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.759 [2024-07-15 14:27:58.108206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.759 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.124677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.124731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.142028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.142090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.160019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.160060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.175413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.175454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.191964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.192003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.209298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.209338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.226162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:18.760 [2024-07-15 14:27:58.226200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.241960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.241997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.260002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.260041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.275160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.275200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.285127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.285165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.300245] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.300282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.318369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.318407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.333844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.333887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:18.760 [2024-07-15 14:27:58.343622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.760 [2024-07-15 14:27:58.343676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.760 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.358365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.358422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.375398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.375442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.391030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.391071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.400723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.400761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.416738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.416776] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.431781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.431818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.448632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.448671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.465152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.465201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.483290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.483355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.498733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.498771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.516295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.516334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.531975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.532013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.550179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.550217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.018 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.018 [2024-07-15 14:27:58.565611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.018 [2024-07-15 14:27:58.565649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.019 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.019 [2024-07-15 14:27:58.581289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.019 [2024-07-15 14:27:58.581329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.019 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.019 [2024-07-15 14:27:58.597106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.019 [2024-07-15 14:27:58.597145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.019 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.019 [2024-07-15 14:27:58.607556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.019 [2024-07-15 14:27:58.607593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.019 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.622784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.622822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.639248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.639289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.655720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.655757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.673276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.673319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.688787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.688828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.698243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.698281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.712939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.712997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:19.277 [2024-07-15 14:27:58.723676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.723725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.737861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.737908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.748473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.748509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.763007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.763053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.772899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.772935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.787367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.787421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.804885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.804926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.819758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.819796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.828842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.828879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.844870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.844915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.855212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.855249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.277 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.277 [2024-07-15 14:27:58.869716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.277 [2024-07-15 14:27:58.869770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.535 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.535 [2024-07-15 14:27:58.886739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.535 [2024-07-15 14:27:58.886781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:58.902085] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:58.902126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:58.918770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:58.918828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:58.934412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:58.934460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:58.952965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:58.953019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:58.968359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:58.968398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:58.985341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:58.985384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.000935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.000986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.011301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.011343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.022723] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.022781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.037649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.037707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.047499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.047536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.063157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.063196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.085979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.086041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.096825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.096865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.107726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.107784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.536 [2024-07-15 14:27:59.122273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.536 [2024-07-15 14:27:59.122318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.536 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.138436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.138484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.156052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.156112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.171720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.171762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.182262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.182316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.197297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.197344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.208034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.208072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.223578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.223625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.234391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.234445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.249474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.249517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.266660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.266730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.281759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:19.794 [2024-07-15 14:27:59.281798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.297648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.297690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.315068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.315127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.329930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.329971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.346115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.346175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.362265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.362306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:19.794 [2024-07-15 14:27:59.378924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.794 [2024-07-15 14:27:59.378962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.794 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.395543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.395595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.411907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.411951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.427636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.427691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.445891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.445931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.461209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.461250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.473434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.473487] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.491440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.491484] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.506860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.506913] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.524554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.524597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.539460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.539500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.549838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.549874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.564253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.564294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.580518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.580587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.598317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.598367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.613792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.613833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.624537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.624578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.053 [2024-07-15 14:27:59.639615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.053 [2024-07-15 14:27:59.639654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.053 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.655899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.655957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.672523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.672563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.689683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.689735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.704470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.704536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.721514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.721570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.737036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.737089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.747796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.747841] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.763570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.763621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.780448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.780512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:11:20.311 [2024-07-15 14:27:59.796191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.796248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.806689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.806740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.821879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.821926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.837738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.837784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.853102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.853166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.869027] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.869072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.879516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.879555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.311 [2024-07-15 14:27:59.894396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.311 [2024-07-15 14:27:59.894459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.311 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:27:59.910943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:27:59.911003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:27:59.927510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:27:59.927552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:27:59.943775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:27:59.943844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:27:59.954261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:27:59.954331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:27:59.969306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:27:59.969371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:27:59.986456] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:27:59.986499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:27:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.002081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.002131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.019227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.019290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.035343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.035393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.052095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.052159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.069612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.069675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.085090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.085132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.095446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.095510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.111032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.111097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.126473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.126522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.143370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.143439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.570 [2024-07-15 14:28:00.159177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.570 [2024-07-15 14:28:00.159246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.570 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.169574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.169620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.184686] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.184755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.200920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.200988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.218020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.218075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.234436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.234506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.251602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.251673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.267273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.267319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.284134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.284186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.299982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.300032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.315795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.315846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.333626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.333684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.348998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.349045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.364932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.364974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.829 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.829 [2024-07-15 14:28:00.381992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.829 [2024-07-15 14:28:00.382036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.830 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.830 [2024-07-15 14:28:00.392087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
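Each repeated error triple above records one rejected re-registration of namespace 1: the test keeps calling nvmf_subsystem_add_ns for an NSID that nqn.2016-06.io.spdk:cnode1 already exposes, and the target answers every attempt with JSON-RPC error -32602 (Invalid parameters). A minimal sketch of a single such call, assuming a locally built SPDK tree, the default RPC socket, and that malloc0 is already exported as NSID 1; the test itself drives these calls through its rpc_cmd helper, as the shell trace further below shows:
  # Hypothetical manual reproduction, not part of zcopy.sh itself.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Expected result: "Invalid parameters" (code -32602) on the client side and
  # "Requested NSID 1 already in use" in the target log, as seen above.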
00:11:20.830 [2024-07-15 14:28:00.392125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.830 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.830 [2024-07-15 14:28:00.406660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.830 [2024-07-15 14:28:00.406711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.830 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:20.830 [2024-07-15 14:28:00.417394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.830 [2024-07-15 14:28:00.417446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.830 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.088 [2024-07-15 14:28:00.432776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.088 [2024-07-15 14:28:00.432818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.088 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.088 [2024-07-15 14:28:00.449487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.088 [2024-07-15 14:28:00.449530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.088 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.088 [2024-07-15 14:28:00.465251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.088 [2024-07-15 14:28:00.465293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.088 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.088 [2024-07-15 14:28:00.475643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.475682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.490797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.490836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.509894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.509942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.523113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.523151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089
00:11:21.089 Latency(us)
00:11:21.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:21.089 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:21.089 Nvme1n1 : 5.01 11444.62 89.41 0.00 0.00 11168.31 4766.25 23473.80
00:11:21.089 ===================================================================================================================
00:11:21.089 Total : 11444.62 89.41 0.00 0.00 11168.31 4766.25 23473.80
00:11:21.089 [2024-07-15 14:28:00.528824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.528858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.540838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.540883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.552852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.552900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused:
*ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.564858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.564905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.576858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.576904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.588860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.588904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.600850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.600888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.612841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.612887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.620837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.620874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.632859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.632898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.644846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.644882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.656890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.656941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.668860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.668895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.089 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.089 [2024-07-15 14:28:00.680846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.089 [2024-07-15 14:28:00.680876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.409 2024/07/15 14:28:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:21.409 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76210) - No such process 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76210 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 delay0 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.409 14:28:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:21.409 [2024-07-15 14:28:00.901138] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:27.972 Initializing NVMe Controllers 00:11:27.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:27.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:27.972 Initialization complete. Launching workers. 00:11:27.972 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 609 00:11:27.972 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 896, failed to submit 33 00:11:27.972 success 715, unsuccess 181, failed 0 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.972 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.973 rmmod nvme_tcp 00:11:27.973 rmmod nvme_fabrics 00:11:27.973 rmmod nvme_keyring 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 76039 ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 76039 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 76039 ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 76039 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76039 00:11:27.973 killing process with pid 76039 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' 
reactor_1 = sudo ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76039' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 76039 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 76039 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:27.973 00:11:27.973 real 0m24.470s 00:11:27.973 user 0m39.815s 00:11:27.973 sys 0m6.398s 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.973 14:28:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 ************************************ 00:11:27.973 END TEST nvmf_zcopy 00:11:27.973 ************************************ 00:11:27.973 14:28:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:27.973 14:28:07 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:27.973 14:28:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:27.973 14:28:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.973 14:28:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 ************************************ 00:11:27.973 START TEST nvmf_nmic 00:11:27.973 ************************************ 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:27.973 * Looking for test storage... 
00:11:27.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:27.973 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:27.974 Cannot find device "nvmf_tgt_br" 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.974 Cannot find device "nvmf_tgt_br2" 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:27.974 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:28.232 Cannot find device "nvmf_tgt_br" 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:28.232 Cannot find device "nvmf_tgt_br2" 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.232 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:28.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:28.492 00:11:28.492 --- 10.0.0.2 ping statistics --- 00:11:28.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.492 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:28.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:11:28.492 00:11:28.492 --- 10.0.0.3 ping statistics --- 00:11:28.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.492 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:28.492 00:11:28.492 --- 10.0.0.1 ping statistics --- 00:11:28.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.492 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76531 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76531 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76531 ']' 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.492 14:28:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:28.492 [2024-07-15 14:28:07.920014] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:28.492 [2024-07-15 14:28:07.920123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.492 [2024-07-15 14:28:08.056190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.750 [2024-07-15 14:28:08.117821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.750 [2024-07-15 14:28:08.117875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
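Editor's note: the nvmf_veth_init trace above is what builds the virtual topology the rest of this run depends on: 10.0.0.1 on nvmf_init_if for the initiator, 10.0.0.2 and 10.0.0.3 on veth ends moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, TCP port 4420 opened, and reachability confirmed by the three pings. A minimal standalone sketch of that sequence, assuming root privileges and the iproute2/iptables tools used in the trace (not the test script itself, just the equivalent commands):

#!/usr/bin/env bash
# Rebuild the veth/netns topology used by the nvmf TCP tests.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br peer ends join the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge all host-side peer ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic and bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks, mirroring the pings in the log
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Only the *_if ends hold IP addresses, which is why the teardown seen later in the log flushes nvmf_init_if and deletes the bridge and namespace rather than touching the peer ends individually.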
00:11:28.750 [2024-07-15 14:28:08.117887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.750 [2024-07-15 14:28:08.117896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.750 [2024-07-15 14:28:08.117903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.750 [2024-07-15 14:28:08.118031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.750 [2024-07-15 14:28:08.118718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.750 [2024-07-15 14:28:08.118804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.750 [2024-07-15 14:28:08.118812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 [2024-07-15 14:28:08.995985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 Malloc0 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 [2024-07-15 14:28:09.068916] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 test case1: single bdev can't be used in multiple subsystems 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:29.701 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.702 [2024-07-15 14:28:09.100761] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:29.702 [2024-07-15 14:28:09.100824] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:29.702 [2024-07-15 14:28:09.100838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.702 2024/07/15 14:28:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:29.702 request: 00:11:29.702 { 00:11:29.702 "method": "nvmf_subsystem_add_ns", 00:11:29.702 "params": { 00:11:29.702 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:29.702 "namespace": { 00:11:29.702 "bdev_name": "Malloc0", 00:11:29.702 "no_auto_visible": false 00:11:29.702 } 00:11:29.702 } 00:11:29.702 } 00:11:29.702 Got JSON-RPC error response 00:11:29.702 GoRPCClient: error on JSON-RPC call 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:29.702 Adding namespace failed - expected result. 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
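Editor's note: test case1 above exercises the failure path on purpose. Malloc0 is already claimed by cnode1, so adding it to a second subsystem has to come back with Code=-32602 ("bdev Malloc0 already claimed"), and the test only passes because nmic_status ends up non-zero. A rough reproduction of that check with scripts/rpc.py (the rpc_cmd wrapper in the trace resolves to this script; this assumes a target is already running and listening on the default /var/tmp/spdk.sock):

#!/usr/bin/env bash
# Expected-failure check: one malloc bdev cannot be exported by two subsystems.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: second add_ns succeeded" >&2
    exit 1
else
    # the Invalid parameters error from the target is the expected outcome
    echo "Adding namespace failed - expected result."
fi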
00:11:29.702 test case2: host connect to nvmf target in multiple paths 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.702 [2024-07-15 14:28:09.116898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.702 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:29.960 14:28:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.960 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.960 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.960 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:29.960 14:28:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:32.491 14:28:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:32.491 [global] 00:11:32.491 thread=1 00:11:32.491 invalidate=1 00:11:32.491 rw=write 00:11:32.491 time_based=1 00:11:32.491 runtime=1 00:11:32.491 ioengine=libaio 00:11:32.491 direct=1 00:11:32.491 bs=4096 00:11:32.491 iodepth=1 00:11:32.491 norandommap=0 00:11:32.491 numjobs=1 00:11:32.491 00:11:32.491 verify_dump=1 00:11:32.491 verify_backlog=512 00:11:32.491 verify_state_save=0 00:11:32.491 do_verify=1 00:11:32.491 verify=crc32c-intel 00:11:32.491 [job0] 00:11:32.491 filename=/dev/nvme0n1 00:11:32.491 Could not set queue depth (nvme0n1) 00:11:32.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:32.491 fio-3.35 00:11:32.491 Starting 1 thread 00:11:33.426 00:11:33.426 job0: (groupid=0, jobs=1): err= 0: pid=76641: Mon Jul 15 14:28:12 2024 00:11:33.426 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:33.426 slat (nsec): min=14323, max=51753, avg=18194.45, stdev=5162.41 00:11:33.426 clat (usec): 
min=130, max=560, avg=158.34, stdev=27.41 00:11:33.426 lat (usec): min=146, max=609, avg=176.53, stdev=27.71 00:11:33.426 clat percentiles (usec): 00:11:33.426 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:11:33.426 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:11:33.426 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 202], 95.00th=[ 212], 00:11:33.426 | 99.00th=[ 233], 99.50th=[ 243], 99.90th=[ 359], 99.95th=[ 424], 00:11:33.426 | 99.99th=[ 562] 00:11:33.426 write: IOPS=3278, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:11:33.426 slat (usec): min=18, max=128, avg=25.96, stdev= 6.91 00:11:33.426 clat (usec): min=85, max=652, avg=109.42, stdev=23.00 00:11:33.426 lat (usec): min=115, max=674, avg=135.39, stdev=24.57 00:11:33.426 clat percentiles (usec): 00:11:33.426 | 1.00th=[ 94], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 99], 00:11:33.426 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 106], 00:11:33.426 | 70.00th=[ 110], 80.00th=[ 115], 90.00th=[ 129], 95.00th=[ 143], 00:11:33.426 | 99.00th=[ 165], 99.50th=[ 178], 99.90th=[ 420], 99.95th=[ 453], 00:11:33.426 | 99.99th=[ 652] 00:11:33.426 bw ( KiB/s): min=13376, max=13376, per=100.00%, avg=13376.00, stdev= 0.00, samples=1 00:11:33.426 iops : min= 3344, max= 3344, avg=3344.00, stdev= 0.00, samples=1 00:11:33.426 lat (usec) : 100=14.59%, 250=85.03%, 500=0.35%, 750=0.03% 00:11:33.426 cpu : usr=2.60%, sys=10.70%, ctx=6354, majf=0, minf=2 00:11:33.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:33.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:33.426 issued rwts: total=3072,3282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:33.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:33.426 00:11:33.426 Run status group 0 (all jobs): 00:11:33.426 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:33.426 WRITE: bw=12.8MiB/s (13.4MB/s), 12.8MiB/s-12.8MiB/s (13.4MB/s-13.4MB/s), io=12.8MiB (13.4MB), run=1001-1001msec 00:11:33.426 00:11:33.426 Disk stats (read/write): 00:11:33.426 nvme0n1: ios=2782/3072, merge=0/0, ticks=453/371, in_queue=824, util=90.86% 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # 
sync 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.426 rmmod nvme_tcp 00:11:33.426 rmmod nvme_fabrics 00:11:33.426 rmmod nvme_keyring 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76531 ']' 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76531 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76531 ']' 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76531 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76531 00:11:33.426 killing process with pid 76531 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76531' 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76531 00:11:33.426 14:28:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76531 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:33.690 00:11:33.690 real 0m5.764s 00:11:33.690 user 0m19.687s 00:11:33.690 sys 0m1.303s 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.690 14:28:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:33.690 ************************************ 00:11:33.690 END TEST nvmf_nmic 00:11:33.690 ************************************ 00:11:33.690 14:28:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:33.690 14:28:13 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:33.690 14:28:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.690 14:28:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:33.690 14:28:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.690 ************************************ 00:11:33.690 START TEST nvmf_fio_target 00:11:33.690 ************************************ 00:11:33.690 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:33.956 * Looking for test storage... 00:11:33.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:33.956 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.957 
14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:33.957 Cannot find device "nvmf_tgt_br" 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.957 Cannot find device "nvmf_tgt_br2" 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:33.957 Cannot find device "nvmf_tgt_br" 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:33.957 Cannot find device "nvmf_tgt_br2" 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:33.957 14:28:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:33.957 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:34.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:34.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:11:34.214 00:11:34.214 --- 10.0.0.2 ping statistics --- 00:11:34.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.214 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:34.214 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.214 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:34.214 00:11:34.214 --- 10.0.0.3 ping statistics --- 00:11:34.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.214 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:34.214 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:11:34.214 00:11:34.214 --- 10.0.0.1 ping statistics --- 00:11:34.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.215 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76819 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76819 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76819 ']' 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
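Editor's note: nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocks in waitforlisten until the JSON-RPC socket answers. A simplified stand-in for that start-and-wait step, with the binary path and flags taken from the trace and waitforlisten approximated by polling rpc.py (the real helper also checks the pid and retries with a bounded count):

#!/usr/bin/env bash
# Start nvmf_tgt inside the target namespace and wait for its RPC socket.
set -e
spdk=/home/vagrant/spdk_repo/spdk

ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# crude waitforlisten: poll the JSON-RPC socket until the app responds
for _ in $(seq 1 100); do
    if "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

echo "nvmf_tgt is up with pid $nvmfpid"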
00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.215 14:28:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.215 [2024-07-15 14:28:13.763476] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:34.215 [2024-07-15 14:28:13.763571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.472 [2024-07-15 14:28:13.899281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.472 [2024-07-15 14:28:13.966854] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.472 [2024-07-15 14:28:13.966913] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.472 [2024-07-15 14:28:13.966926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.472 [2024-07-15 14:28:13.966936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.472 [2024-07-15 14:28:13.966945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.472 [2024-07-15 14:28:13.967318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.472 [2024-07-15 14:28:13.967453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.472 [2024-07-15 14:28:13.967819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.472 [2024-07-15 14:28:13.967824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.472 14:28:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.472 14:28:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:34.472 14:28:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.472 14:28:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.472 14:28:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.730 14:28:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.730 14:28:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:34.988 [2024-07-15 14:28:14.344305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.988 14:28:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.246 14:28:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:35.246 14:28:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.503 14:28:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:35.503 14:28:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.759 14:28:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:35.760 14:28:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:11:36.015 14:28:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:36.015 14:28:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:36.272 14:28:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:36.529 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:36.529 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:36.785 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:36.785 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.042 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:37.042 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:37.299 14:28:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.556 14:28:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:37.556 14:28:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:37.813 14:28:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:37.813 14:28:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.071 14:28:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.328 [2024-07-15 14:28:17.855475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.328 14:28:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:38.592 14:28:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:38.850 14:28:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.107 14:28:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:39.107 14:28:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:39.107 14:28:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.107 14:28:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:39.107 14:28:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:39.107 14:28:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 
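The RPC sequence traced above builds the whole target configuration: a TCP transport, two standalone malloc bdevs, a raid0 and a concat bdev assembled from further malloc bdevs, a subsystem carrying those four namespaces, and a TCP listener; the host then connects with nvme-cli and waits until all four namespaces appear. A consolidated sketch of that sequence (Malloc0..Malloc6 are the names auto-assigned in creation order; the --hostnqn/--hostid values from the trace are elided here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # Seven 64 MB / 512 B-block malloc bdevs, named Malloc0..Malloc6 in order.
    for i in $(seq 0 6); do "$rpc" bdev_malloc_create 64 512; done

    # raid0 over Malloc2/Malloc3, concat over Malloc4/Malloc5/Malloc6.
    "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    "$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # Subsystem with four namespaces, listening on 10.0.0.2:4420.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

    # Initiator side: connect, then wait until four namespaces report the serial.
    sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do
        sleep 2
    done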
00:11:41.013 14:28:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:41.013 14:28:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.013 14:28:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:41.270 14:28:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:41.270 14:28:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.270 14:28:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:41.270 14:28:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:41.270 [global] 00:11:41.270 thread=1 00:11:41.270 invalidate=1 00:11:41.270 rw=write 00:11:41.270 time_based=1 00:11:41.270 runtime=1 00:11:41.270 ioengine=libaio 00:11:41.270 direct=1 00:11:41.270 bs=4096 00:11:41.270 iodepth=1 00:11:41.270 norandommap=0 00:11:41.270 numjobs=1 00:11:41.270 00:11:41.270 verify_dump=1 00:11:41.270 verify_backlog=512 00:11:41.270 verify_state_save=0 00:11:41.270 do_verify=1 00:11:41.270 verify=crc32c-intel 00:11:41.270 [job0] 00:11:41.270 filename=/dev/nvme0n1 00:11:41.270 [job1] 00:11:41.270 filename=/dev/nvme0n2 00:11:41.270 [job2] 00:11:41.270 filename=/dev/nvme0n3 00:11:41.270 [job3] 00:11:41.270 filename=/dev/nvme0n4 00:11:41.270 Could not set queue depth (nvme0n1) 00:11:41.270 Could not set queue depth (nvme0n2) 00:11:41.270 Could not set queue depth (nvme0n3) 00:11:41.270 Could not set queue depth (nvme0n4) 00:11:41.270 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.270 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.270 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.270 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:41.270 fio-3.35 00:11:41.270 Starting 4 threads 00:11:42.641 00:11:42.641 job0: (groupid=0, jobs=1): err= 0: pid=77098: Mon Jul 15 14:28:21 2024 00:11:42.641 read: IOPS=1615, BW=6463KiB/s (6618kB/s)(6476KiB/1002msec) 00:11:42.641 slat (nsec): min=9179, max=96318, avg=16242.81, stdev=5298.47 00:11:42.641 clat (usec): min=163, max=42150, avg=305.15, stdev=1041.43 00:11:42.641 lat (usec): min=215, max=42161, avg=321.39, stdev=1041.31 00:11:42.641 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 253], 00:11:42.642 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:11:42.642 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 347], 00:11:42.642 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 1090], 99.95th=[42206], 00:11:42.642 | 99.99th=[42206] 00:11:42.642 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:11:42.642 slat (usec): min=12, max=118, avg=29.12, stdev=12.46 00:11:42.642 clat (usec): min=104, max=7529, avg=201.86, stdev=191.47 00:11:42.642 lat (usec): min=128, max=7568, avg=230.97, stdev=192.13 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 129], 20.00th=[ 153], 00:11:42.642 | 30.00th=[ 180], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 206], 00:11:42.642 | 70.00th=[ 215], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 253], 00:11:42.642 | 
99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 2507], 99.95th=[ 2540], 00:11:42.642 | 99.99th=[ 7504] 00:11:42.642 bw ( KiB/s): min= 8192, max= 8192, per=21.09%, avg=8192.00, stdev= 0.00, samples=1 00:11:42.642 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:42.642 lat (usec) : 250=59.09%, 500=40.63%, 750=0.03%, 1000=0.03% 00:11:42.642 lat (msec) : 2=0.08%, 4=0.08%, 10=0.03%, 50=0.03% 00:11:42.642 cpu : usr=1.50%, sys=6.79%, ctx=3683, majf=0, minf=9 00:11:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 issued rwts: total=1619,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.642 job1: (groupid=0, jobs=1): err= 0: pid=77099: Mon Jul 15 14:28:21 2024 00:11:42.642 read: IOPS=1792, BW=7169KiB/s (7341kB/s)(7176KiB/1001msec) 00:11:42.642 slat (nsec): min=11615, max=79871, avg=17531.32, stdev=6022.95 00:11:42.642 clat (usec): min=153, max=42085, avg=305.20, stdev=988.11 00:11:42.642 lat (usec): min=168, max=42101, avg=322.73, stdev=988.15 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 212], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 251], 00:11:42.642 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:11:42.642 | 70.00th=[ 285], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 371], 00:11:42.642 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 1029], 99.95th=[42206], 00:11:42.642 | 99.99th=[42206] 00:11:42.642 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:42.642 slat (usec): min=12, max=120, avg=23.87, stdev= 8.20 00:11:42.642 clat (usec): min=102, max=459, avg=178.27, stdev=45.56 00:11:42.642 lat (usec): min=125, max=480, avg=202.14, stdev=45.93 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 128], 00:11:42.642 | 30.00th=[ 137], 40.00th=[ 161], 50.00th=[ 190], 60.00th=[ 200], 00:11:42.642 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 247], 00:11:42.642 | 99.00th=[ 269], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 306], 00:11:42.642 | 99.99th=[ 461] 00:11:42.642 bw ( KiB/s): min= 8192, max= 8192, per=21.09%, avg=8192.00, stdev= 0.00, samples=1 00:11:42.642 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:42.642 lat (usec) : 250=60.83%, 500=39.12% 00:11:42.642 lat (msec) : 2=0.03%, 50=0.03% 00:11:42.642 cpu : usr=1.90%, sys=6.10%, ctx=3851, majf=0, minf=12 00:11:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 issued rwts: total=1794,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.642 job2: (groupid=0, jobs=1): err= 0: pid=77100: Mon Jul 15 14:28:21 2024 00:11:42.642 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:42.642 slat (usec): min=14, max=112, avg=20.64, stdev= 6.59 00:11:42.642 clat (usec): min=150, max=309, avg=176.36, stdev=17.52 00:11:42.642 lat (usec): min=166, max=332, avg=197.00, stdev=20.50 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:42.642 | 30.00th=[ 167], 
40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:11:42.642 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 208], 00:11:42.642 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 293], 00:11:42.642 | 99.99th=[ 310] 00:11:42.642 write: IOPS=2675, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:11:42.642 slat (usec): min=21, max=115, avg=30.82, stdev= 9.80 00:11:42.642 clat (usec): min=107, max=440, avg=149.67, stdev=34.05 00:11:42.642 lat (usec): min=132, max=502, avg=180.48, stdev=38.20 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 127], 00:11:42.642 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:11:42.642 | 70.00th=[ 151], 80.00th=[ 169], 90.00th=[ 208], 95.00th=[ 223], 00:11:42.642 | 99.00th=[ 258], 99.50th=[ 281], 99.90th=[ 388], 99.95th=[ 429], 00:11:42.642 | 99.99th=[ 441] 00:11:42.642 bw ( KiB/s): min=11752, max=11752, per=30.26%, avg=11752.00, stdev= 0.00, samples=1 00:11:42.642 iops : min= 2938, max= 2938, avg=2938.00, stdev= 0.00, samples=1 00:11:42.642 lat (usec) : 250=98.74%, 500=1.26% 00:11:42.642 cpu : usr=2.50%, sys=10.10%, ctx=5238, majf=0, minf=5 00:11:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 issued rwts: total=2560,2678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.642 job3: (groupid=0, jobs=1): err= 0: pid=77101: Mon Jul 15 14:28:21 2024 00:11:42.642 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:42.642 slat (nsec): min=13894, max=45945, avg=17140.20, stdev=3792.16 00:11:42.642 clat (usec): min=150, max=505, avg=178.56, stdev=21.93 00:11:42.642 lat (usec): min=166, max=528, avg=195.70, stdev=23.96 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:11:42.642 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:11:42.642 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 227], 00:11:42.642 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 318], 99.95th=[ 367], 00:11:42.642 | 99.99th=[ 506] 00:11:42.642 write: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:11:42.642 slat (usec): min=20, max=115, avg=26.44, stdev= 7.21 00:11:42.642 clat (usec): min=111, max=2648, avg=138.80, stdev=61.29 00:11:42.642 lat (usec): min=133, max=2683, avg=165.24, stdev=62.50 00:11:42.642 clat percentiles (usec): 00:11:42.642 | 1.00th=[ 118], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 126], 00:11:42.642 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:11:42.642 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 167], 00:11:42.642 | 99.00th=[ 204], 99.50th=[ 243], 99.90th=[ 619], 99.95th=[ 1991], 00:11:42.642 | 99.99th=[ 2638] 00:11:42.642 bw ( KiB/s): min=12288, max=12288, per=31.64%, avg=12288.00, stdev= 0.00, samples=1 00:11:42.642 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:42.642 lat (usec) : 250=99.13%, 500=0.78%, 750=0.05% 00:11:42.642 lat (msec) : 2=0.02%, 4=0.02% 00:11:42.642 cpu : usr=1.80%, sys=9.50%, ctx=5515, majf=0, minf=9 00:11:42.642 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:42.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:42.642 issued rwts: total=2560,2954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:42.642 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:42.642 00:11:42.642 Run status group 0 (all jobs): 00:11:42.642 READ: bw=33.3MiB/s (34.9MB/s), 6463KiB/s-9.99MiB/s (6618kB/s-10.5MB/s), io=33.3MiB (35.0MB), run=1001-1002msec 00:11:42.642 WRITE: bw=37.9MiB/s (39.8MB/s), 8176KiB/s-11.5MiB/s (8372kB/s-12.1MB/s), io=38.0MiB (39.8MB), run=1001-1002msec 00:11:42.642 00:11:42.642 Disk stats (read/write): 00:11:42.642 nvme0n1: ios=1536/1536, merge=0/0, ticks=527/307, in_queue=834, util=90.35% 00:11:42.642 nvme0n2: ios=1536/1799, merge=0/0, ticks=461/313, in_queue=774, util=86.82% 00:11:42.642 nvme0n3: ios=2048/2335, merge=0/0, ticks=376/386, in_queue=762, util=88.82% 00:11:42.642 nvme0n4: ios=2127/2560, merge=0/0, ticks=434/383, in_queue=817, util=90.82% 00:11:42.642 14:28:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:42.642 [global] 00:11:42.642 thread=1 00:11:42.642 invalidate=1 00:11:42.642 rw=randwrite 00:11:42.642 time_based=1 00:11:42.642 runtime=1 00:11:42.642 ioengine=libaio 00:11:42.642 direct=1 00:11:42.642 bs=4096 00:11:42.642 iodepth=1 00:11:42.642 norandommap=0 00:11:42.642 numjobs=1 00:11:42.642 00:11:42.642 verify_dump=1 00:11:42.642 verify_backlog=512 00:11:42.642 verify_state_save=0 00:11:42.642 do_verify=1 00:11:42.642 verify=crc32c-intel 00:11:42.642 [job0] 00:11:42.642 filename=/dev/nvme0n1 00:11:42.642 [job1] 00:11:42.642 filename=/dev/nvme0n2 00:11:42.642 [job2] 00:11:42.642 filename=/dev/nvme0n3 00:11:42.642 [job3] 00:11:42.642 filename=/dev/nvme0n4 00:11:42.642 Could not set queue depth (nvme0n1) 00:11:42.642 Could not set queue depth (nvme0n2) 00:11:42.642 Could not set queue depth (nvme0n3) 00:11:42.642 Could not set queue depth (nvme0n4) 00:11:42.642 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.642 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.642 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.642 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.642 fio-3.35 00:11:42.642 Starting 4 threads 00:11:44.017 00:11:44.017 job0: (groupid=0, jobs=1): err= 0: pid=77158: Mon Jul 15 14:28:23 2024 00:11:44.017 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:44.017 slat (usec): min=13, max=126, avg=20.43, stdev= 7.00 00:11:44.017 clat (usec): min=139, max=527, avg=168.04, stdev=20.19 00:11:44.017 lat (usec): min=155, max=549, avg=188.46, stdev=21.55 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:11:44.017 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:11:44.017 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 208], 00:11:44.017 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 285], 99.95th=[ 424], 00:11:44.017 | 99.99th=[ 529] 00:11:44.017 write: IOPS=3063, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:44.017 slat (nsec): min=20264, max=81018, avg=31426.32, stdev=8820.81 00:11:44.017 clat (usec): min=102, max=439, avg=132.98, stdev=21.03 00:11:44.017 lat (usec): min=127, max=482, avg=164.41, stdev=21.72 
00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 109], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 118], 00:11:44.017 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 131], 00:11:44.017 | 70.00th=[ 137], 80.00th=[ 147], 90.00th=[ 163], 95.00th=[ 176], 00:11:44.017 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 247], 99.95th=[ 285], 00:11:44.017 | 99.99th=[ 441] 00:11:44.017 bw ( KiB/s): min=12288, max=12288, per=30.64%, avg=12288.00, stdev= 0.00, samples=1 00:11:44.017 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:44.017 lat (usec) : 250=99.80%, 500=0.18%, 750=0.02% 00:11:44.017 cpu : usr=2.50%, sys=11.30%, ctx=5627, majf=0, minf=15 00:11:44.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 issued rwts: total=2560,3067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.017 job1: (groupid=0, jobs=1): err= 0: pid=77159: Mon Jul 15 14:28:23 2024 00:11:44.017 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:11:44.017 slat (nsec): min=12189, max=45728, avg=15450.45, stdev=3088.28 00:11:44.017 clat (usec): min=179, max=429, avg=283.89, stdev=17.12 00:11:44.017 lat (usec): min=192, max=445, avg=299.34, stdev=17.38 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:11:44.017 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:11:44.017 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:11:44.017 | 99.00th=[ 334], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 429], 00:11:44.017 | 99.99th=[ 429] 00:11:44.017 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:44.017 slat (usec): min=17, max=122, avg=24.59, stdev= 6.84 00:11:44.017 clat (usec): min=106, max=617, avg=222.39, stdev=18.74 00:11:44.017 lat (usec): min=133, max=640, avg=246.98, stdev=19.97 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 210], 00:11:44.017 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:11:44.017 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 249], 00:11:44.017 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 314], 99.95th=[ 330], 00:11:44.017 | 99.99th=[ 619] 00:11:44.017 bw ( KiB/s): min= 8192, max= 8192, per=20.42%, avg=8192.00, stdev= 0.00, samples=1 00:11:44.017 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:44.017 lat (usec) : 250=53.59%, 500=46.38%, 750=0.03% 00:11:44.017 cpu : usr=1.70%, sys=5.50%, ctx=3679, majf=0, minf=7 00:11:44.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.017 job2: (groupid=0, jobs=1): err= 0: pid=77160: Mon Jul 15 14:28:23 2024 00:11:44.017 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:44.017 slat (nsec): min=12989, max=69933, avg=15418.73, stdev=3748.51 00:11:44.017 clat (usec): min=151, max=2743, avg=186.82, stdev=69.89 00:11:44.017 lat (usec): min=165, 
max=2770, avg=202.24, stdev=70.52 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:11:44.017 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:11:44.017 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 223], 95.00th=[ 245], 00:11:44.017 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 644], 99.95th=[ 1909], 00:11:44.017 | 99.99th=[ 2737] 00:11:44.017 write: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets 00:11:44.017 slat (nsec): min=19756, max=73237, avg=24146.37, stdev=6345.96 00:11:44.017 clat (usec): min=110, max=763, avg=140.42, stdev=23.84 00:11:44.017 lat (usec): min=131, max=800, avg=164.56, stdev=25.17 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 124], 00:11:44.017 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:11:44.017 | 70.00th=[ 145], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 184], 00:11:44.017 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 221], 99.95th=[ 233], 00:11:44.017 | 99.99th=[ 766] 00:11:44.017 bw ( KiB/s): min=12288, max=12288, per=30.64%, avg=12288.00, stdev= 0.00, samples=1 00:11:44.017 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:44.017 lat (usec) : 250=97.87%, 500=2.01%, 750=0.07%, 1000=0.02% 00:11:44.017 lat (msec) : 2=0.02%, 4=0.02% 00:11:44.017 cpu : usr=1.90%, sys=8.30%, ctx=5441, majf=0, minf=10 00:11:44.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 issued rwts: total=2560,2874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.017 job3: (groupid=0, jobs=1): err= 0: pid=77161: Mon Jul 15 14:28:23 2024 00:11:44.017 read: IOPS=1626, BW=6505KiB/s (6662kB/s)(6512KiB/1001msec) 00:11:44.017 slat (nsec): min=12416, max=54746, avg=15205.57, stdev=3556.63 00:11:44.017 clat (usec): min=185, max=418, avg=284.17, stdev=15.65 00:11:44.017 lat (usec): min=198, max=431, avg=299.37, stdev=15.75 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:11:44.017 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 281], 60.00th=[ 285], 00:11:44.017 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:11:44.017 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 404], 99.95th=[ 420], 00:11:44.017 | 99.99th=[ 420] 00:11:44.017 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:44.017 slat (usec): min=16, max=156, avg=24.86, stdev= 7.02 00:11:44.017 clat (usec): min=132, max=547, avg=222.01, stdev=16.59 00:11:44.017 lat (usec): min=157, max=570, avg=246.87, stdev=17.94 00:11:44.017 clat percentiles (usec): 00:11:44.017 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:11:44.017 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:11:44.017 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 247], 00:11:44.017 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 310], 00:11:44.017 | 99.99th=[ 545] 00:11:44.017 bw ( KiB/s): min= 8192, max= 8192, per=20.42%, avg=8192.00, stdev= 0.00, samples=1 00:11:44.017 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:44.017 lat (usec) : 250=54.00%, 500=45.97%, 750=0.03% 00:11:44.017 cpu : usr=1.50%, 
sys=5.70%, ctx=3678, majf=0, minf=13 00:11:44.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.017 issued rwts: total=1628,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.017 00:11:44.017 Run status group 0 (all jobs): 00:11:44.017 READ: bw=32.7MiB/s (34.3MB/s), 6505KiB/s-9.99MiB/s (6662kB/s-10.5MB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:11:44.017 WRITE: bw=39.2MiB/s (41.1MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.5MB/s), io=39.2MiB (41.1MB), run=1001-1001msec 00:11:44.017 00:11:44.017 Disk stats (read/write): 00:11:44.017 nvme0n1: ios=2405/2560, merge=0/0, ticks=435/388, in_queue=823, util=89.88% 00:11:44.017 nvme0n2: ios=1584/1640, merge=0/0, ticks=471/378, in_queue=849, util=89.91% 00:11:44.017 nvme0n3: ios=2219/2560, merge=0/0, ticks=437/383, in_queue=820, util=89.46% 00:11:44.017 nvme0n4: ios=1542/1639, merge=0/0, ticks=445/384, in_queue=829, util=90.02% 00:11:44.018 14:28:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:44.018 [global] 00:11:44.018 thread=1 00:11:44.018 invalidate=1 00:11:44.018 rw=write 00:11:44.018 time_based=1 00:11:44.018 runtime=1 00:11:44.018 ioengine=libaio 00:11:44.018 direct=1 00:11:44.018 bs=4096 00:11:44.018 iodepth=128 00:11:44.018 norandommap=0 00:11:44.018 numjobs=1 00:11:44.018 00:11:44.018 verify_dump=1 00:11:44.018 verify_backlog=512 00:11:44.018 verify_state_save=0 00:11:44.018 do_verify=1 00:11:44.018 verify=crc32c-intel 00:11:44.018 [job0] 00:11:44.018 filename=/dev/nvme0n1 00:11:44.018 [job1] 00:11:44.018 filename=/dev/nvme0n2 00:11:44.018 [job2] 00:11:44.018 filename=/dev/nvme0n3 00:11:44.018 [job3] 00:11:44.018 filename=/dev/nvme0n4 00:11:44.018 Could not set queue depth (nvme0n1) 00:11:44.018 Could not set queue depth (nvme0n2) 00:11:44.018 Could not set queue depth (nvme0n3) 00:11:44.018 Could not set queue depth (nvme0n4) 00:11:44.018 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:44.018 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:44.018 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:44.018 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:44.018 fio-3.35 00:11:44.018 Starting 4 threads 00:11:45.391 00:11:45.391 job0: (groupid=0, jobs=1): err= 0: pid=77221: Mon Jul 15 14:28:24 2024 00:11:45.391 read: IOPS=5527, BW=21.6MiB/s (22.6MB/s)(21.7MiB/1003msec) 00:11:45.391 slat (usec): min=4, max=3313, avg=88.30, stdev=400.40 00:11:45.391 clat (usec): min=326, max=16546, avg=11502.69, stdev=1432.19 00:11:45.391 lat (usec): min=2589, max=19621, avg=11590.98, stdev=1395.97 00:11:45.391 clat percentiles (usec): 00:11:45.391 | 1.00th=[ 5932], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10945], 00:11:45.391 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:11:45.391 | 70.00th=[11731], 80.00th=[11994], 90.00th=[13566], 95.00th=[14222], 00:11:45.391 | 99.00th=[15008], 99.50th=[15401], 99.90th=[16057], 99.95th=[16188], 00:11:45.391 | 99.99th=[16581] 00:11:45.391 write: IOPS=5615, BW=21.9MiB/s 
(23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:45.391 slat (usec): min=11, max=3926, avg=83.04, stdev=327.30 00:11:45.391 clat (usec): min=8659, max=16612, avg=11171.31, stdev=1457.17 00:11:45.391 lat (usec): min=8687, max=16688, avg=11254.35, stdev=1460.12 00:11:45.391 clat percentiles (usec): 00:11:45.391 | 1.00th=[ 9110], 5.00th=[ 9372], 10.00th=[ 9503], 20.00th=[ 9765], 00:11:45.391 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:11:45.391 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12649], 95.00th=[14091], 00:11:45.391 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16450], 99.95th=[16581], 00:11:45.391 | 99.99th=[16581] 00:11:45.391 bw ( KiB/s): min=20632, max=24424, per=33.63%, avg=22528.00, stdev=2681.35, samples=2 00:11:45.391 iops : min= 5158, max= 6106, avg=5632.00, stdev=670.34, samples=2 00:11:45.391 lat (usec) : 500=0.01% 00:11:45.391 lat (msec) : 4=0.29%, 10=18.35%, 20=81.35% 00:11:45.391 cpu : usr=4.49%, sys=15.87%, ctx=609, majf=0, minf=1 00:11:45.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:45.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.391 issued rwts: total=5544,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.391 job1: (groupid=0, jobs=1): err= 0: pid=77222: Mon Jul 15 14:28:24 2024 00:11:45.391 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:11:45.391 slat (usec): min=6, max=2753, avg=85.60, stdev=383.14 00:11:45.391 clat (usec): min=8500, max=13541, avg=11394.83, stdev=750.16 00:11:45.391 lat (usec): min=8900, max=15023, avg=11480.43, stdev=672.88 00:11:45.391 clat percentiles (usec): 00:11:45.391 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[10945], 00:11:45.391 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:11:45.391 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:11:45.391 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13173], 99.95th=[13173], 00:11:45.391 | 99.99th=[13566] 00:11:45.391 write: IOPS=5796, BW=22.6MiB/s (23.7MB/s)(22.7MiB/1003msec); 0 zone resets 00:11:45.391 slat (usec): min=9, max=3367, avg=81.63, stdev=345.60 00:11:45.391 clat (usec): min=274, max=13015, avg=10757.68, stdev=1325.95 00:11:45.391 lat (usec): min=2493, max=13037, avg=10839.31, stdev=1327.57 00:11:45.391 clat percentiles (usec): 00:11:45.391 | 1.00th=[ 6390], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9765], 00:11:45.391 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[11338], 00:11:45.391 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518], 00:11:45.391 | 99.00th=[12780], 99.50th=[12780], 99.90th=[13042], 99.95th=[13042], 00:11:45.391 | 99.99th=[13042] 00:11:45.391 bw ( KiB/s): min=21536, max=23952, per=33.96%, avg=22744.00, stdev=1708.37, samples=2 00:11:45.391 iops : min= 5384, max= 5988, avg=5686.00, stdev=427.09, samples=2 00:11:45.391 lat (usec) : 500=0.01% 00:11:45.391 lat (msec) : 4=0.33%, 10=19.21%, 20=80.45% 00:11:45.391 cpu : usr=4.89%, sys=16.07%, ctx=584, majf=0, minf=1 00:11:45.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:45.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.392 issued rwts: total=5632,5814,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:45.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.392 job2: (groupid=0, jobs=1): err= 0: pid=77223: Mon Jul 15 14:28:24 2024 00:11:45.392 read: IOPS=2266, BW=9065KiB/s (9282kB/s)(9092KiB/1003msec) 00:11:45.392 slat (usec): min=5, max=14829, avg=195.42, stdev=1136.03 00:11:45.392 clat (usec): min=1782, max=64563, avg=25290.82, stdev=12024.76 00:11:45.392 lat (usec): min=6129, max=64580, avg=25486.24, stdev=12031.75 00:11:45.392 clat percentiles (usec): 00:11:45.392 | 1.00th=[ 6456], 5.00th=[15795], 10.00th=[16909], 20.00th=[18744], 00:11:45.392 | 30.00th=[19006], 40.00th=[19268], 50.00th=[21627], 60.00th=[23200], 00:11:45.392 | 70.00th=[23725], 80.00th=[27657], 90.00th=[45876], 95.00th=[56361], 00:11:45.392 | 99.00th=[64226], 99.50th=[64226], 99.90th=[64750], 99.95th=[64750], 00:11:45.392 | 99.99th=[64750] 00:11:45.392 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:11:45.392 slat (usec): min=13, max=25522, avg=210.62, stdev=1378.02 00:11:45.392 clat (usec): min=11802, max=68206, avg=25722.38, stdev=14922.60 00:11:45.392 lat (usec): min=14855, max=68234, avg=25933.01, stdev=15006.00 00:11:45.392 clat percentiles (usec): 00:11:45.392 | 1.00th=[12387], 5.00th=[14877], 10.00th=[15139], 20.00th=[15401], 00:11:45.392 | 30.00th=[15533], 40.00th=[16057], 50.00th=[17957], 60.00th=[19006], 00:11:45.392 | 70.00th=[24249], 80.00th=[43254], 90.00th=[51643], 95.00th=[54264], 00:11:45.392 | 99.00th=[67634], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:11:45.392 | 99.99th=[68682] 00:11:45.392 bw ( KiB/s): min= 8192, max=12288, per=15.29%, avg=10240.00, stdev=2896.31, samples=2 00:11:45.392 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:45.392 lat (msec) : 2=0.02%, 10=0.66%, 20=53.01%, 50=34.72%, 100=11.59% 00:11:45.392 cpu : usr=1.80%, sys=6.49%, ctx=156, majf=0, minf=11 00:11:45.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:45.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.392 issued rwts: total=2273,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.392 job3: (groupid=0, jobs=1): err= 0: pid=77224: Mon Jul 15 14:28:24 2024 00:11:45.392 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:11:45.392 slat (usec): min=4, max=16458, avg=160.13, stdev=947.86 00:11:45.392 clat (usec): min=8665, max=71383, avg=18698.20, stdev=9035.37 00:11:45.392 lat (usec): min=8681, max=71432, avg=18858.33, stdev=9119.91 00:11:45.392 clat percentiles (usec): 00:11:45.392 | 1.00th=[ 8979], 5.00th=[11994], 10.00th=[13173], 20.00th=[14484], 00:11:45.392 | 30.00th=[14877], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188], 00:11:45.392 | 70.00th=[17433], 80.00th=[20579], 90.00th=[26608], 95.00th=[38011], 00:11:45.392 | 99.00th=[63701], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:11:45.392 | 99.99th=[71828] 00:11:45.392 write: IOPS=2932, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1014msec); 0 zone resets 00:11:45.392 slat (usec): min=4, max=12573, avg=190.50, stdev=849.58 00:11:45.392 clat (usec): min=4324, max=71315, avg=27075.89, stdev=13879.61 00:11:45.392 lat (usec): min=4359, max=71325, avg=27266.39, stdev=13976.16 00:11:45.392 clat percentiles (usec): 00:11:45.392 | 1.00th=[ 8160], 5.00th=[10159], 10.00th=[13960], 20.00th=[14353], 00:11:45.392 | 30.00th=[15008], 40.00th=[17171], 50.00th=[26084], 
60.00th=[28967], 00:11:45.392 | 70.00th=[33162], 80.00th=[38011], 90.00th=[49546], 95.00th=[54789], 00:11:45.392 | 99.00th=[59507], 99.50th=[61080], 99.90th=[61080], 99.95th=[70779], 00:11:45.392 | 99.99th=[71828] 00:11:45.392 bw ( KiB/s): min= 8328, max=14448, per=17.00%, avg=11388.00, stdev=4327.49, samples=2 00:11:45.392 iops : min= 2082, max= 3612, avg=2847.00, stdev=1081.87, samples=2 00:11:45.392 lat (msec) : 10=2.17%, 20=55.29%, 50=36.66%, 100=5.87% 00:11:45.392 cpu : usr=2.86%, sys=7.60%, ctx=325, majf=0, minf=2 00:11:45.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:45.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:45.392 issued rwts: total=2560,2974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.392 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:45.392 00:11:45.392 Run status group 0 (all jobs): 00:11:45.392 READ: bw=61.7MiB/s (64.7MB/s), 9065KiB/s-21.9MiB/s (9282kB/s-23.0MB/s), io=62.5MiB (65.6MB), run=1003-1014msec 00:11:45.392 WRITE: bw=65.4MiB/s (68.6MB/s), 9.97MiB/s-22.6MiB/s (10.5MB/s-23.7MB/s), io=66.3MiB (69.6MB), run=1003-1014msec 00:11:45.392 00:11:45.392 Disk stats (read/write): 00:11:45.392 nvme0n1: ios=4658/4755, merge=0/0, ticks=12374/11483, in_queue=23857, util=86.67% 00:11:45.392 nvme0n2: ios=4649/5067, merge=0/0, ticks=11945/11292, in_queue=23237, util=87.13% 00:11:45.392 nvme0n3: ios=1824/2048, merge=0/0, ticks=11119/13904, in_queue=25023, util=88.61% 00:11:45.392 nvme0n4: ios=2167/2560, merge=0/0, ticks=39988/63115, in_queue=103103, util=89.55% 00:11:45.392 14:28:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:45.392 [global] 00:11:45.392 thread=1 00:11:45.392 invalidate=1 00:11:45.392 rw=randwrite 00:11:45.392 time_based=1 00:11:45.392 runtime=1 00:11:45.392 ioengine=libaio 00:11:45.392 direct=1 00:11:45.392 bs=4096 00:11:45.392 iodepth=128 00:11:45.392 norandommap=0 00:11:45.392 numjobs=1 00:11:45.392 00:11:45.392 verify_dump=1 00:11:45.392 verify_backlog=512 00:11:45.392 verify_state_save=0 00:11:45.392 do_verify=1 00:11:45.392 verify=crc32c-intel 00:11:45.392 [job0] 00:11:45.392 filename=/dev/nvme0n1 00:11:45.392 [job1] 00:11:45.392 filename=/dev/nvme0n2 00:11:45.392 [job2] 00:11:45.392 filename=/dev/nvme0n3 00:11:45.392 [job3] 00:11:45.392 filename=/dev/nvme0n4 00:11:45.392 Could not set queue depth (nvme0n1) 00:11:45.392 Could not set queue depth (nvme0n2) 00:11:45.392 Could not set queue depth (nvme0n3) 00:11:45.392 Could not set queue depth (nvme0n4) 00:11:45.392 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.392 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.392 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.392 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.392 fio-3.35 00:11:45.392 Starting 4 threads 00:11:46.767 00:11:46.767 job0: (groupid=0, jobs=1): err= 0: pid=77279: Mon Jul 15 14:28:26 2024 00:11:46.767 read: IOPS=3659, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1009msec) 00:11:46.767 slat (usec): min=4, max=10904, avg=139.98, stdev=811.58 00:11:46.767 clat (usec): min=1467, max=34450, avg=16936.38, 
stdev=5783.24 00:11:46.767 lat (usec): min=5036, max=34975, avg=17076.36, stdev=5839.73 00:11:46.767 clat percentiles (usec): 00:11:46.767 | 1.00th=[ 6915], 5.00th=[ 9503], 10.00th=[11731], 20.00th=[12518], 00:11:46.767 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13829], 60.00th=[17433], 00:11:46.767 | 70.00th=[20841], 80.00th=[22938], 90.00th=[25035], 95.00th=[27919], 00:11:46.767 | 99.00th=[30802], 99.50th=[31589], 99.90th=[32375], 99.95th=[32900], 00:11:46.767 | 99.99th=[34341] 00:11:46.767 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:11:46.767 slat (usec): min=4, max=10268, avg=112.72, stdev=498.90 00:11:46.767 clat (usec): min=2732, max=33289, avg=15935.49, stdev=5272.60 00:11:46.767 lat (usec): min=2756, max=33309, avg=16048.21, stdev=5311.12 00:11:46.767 clat percentiles (usec): 00:11:46.767 | 1.00th=[ 4817], 5.00th=[ 7504], 10.00th=[11207], 20.00th=[12649], 00:11:46.767 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[15270], 00:11:46.767 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23725], 95.00th=[25035], 00:11:46.767 | 99.00th=[28967], 99.50th=[29754], 99.90th=[32637], 99.95th=[32637], 00:11:46.767 | 99.99th=[33162] 00:11:46.767 bw ( KiB/s): min=12128, max=20521, per=22.85%, avg=16324.50, stdev=5934.75, samples=2 00:11:46.767 iops : min= 3032, max= 5130, avg=4081.00, stdev=1483.51, samples=2 00:11:46.767 lat (msec) : 2=0.01%, 4=0.18%, 10=6.63%, 20=64.43%, 50=28.75% 00:11:46.767 cpu : usr=3.57%, sys=9.23%, ctx=723, majf=0, minf=6 00:11:46.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.767 issued rwts: total=3692,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.767 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.767 job1: (groupid=0, jobs=1): err= 0: pid=77280: Mon Jul 15 14:28:26 2024 00:11:46.767 read: IOPS=5364, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1005msec) 00:11:46.767 slat (usec): min=4, max=10535, avg=93.75, stdev=596.35 00:11:46.767 clat (usec): min=4328, max=22332, avg=12202.03, stdev=2879.13 00:11:46.767 lat (usec): min=4338, max=22357, avg=12295.79, stdev=2908.61 00:11:46.767 clat percentiles (usec): 00:11:46.767 | 1.00th=[ 5932], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:11:46.767 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11731], 60.00th=[12125], 00:11:46.767 | 70.00th=[12518], 80.00th=[13960], 90.00th=[16319], 95.00th=[18482], 00:11:46.767 | 99.00th=[21103], 99.50th=[21890], 99.90th=[22152], 99.95th=[22414], 00:11:46.767 | 99.99th=[22414] 00:11:46.767 write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:11:46.767 slat (usec): min=4, max=14903, avg=79.83, stdev=436.85 00:11:46.767 clat (usec): min=3815, max=25599, avg=10925.49, stdev=2805.96 00:11:46.767 lat (usec): min=3844, max=25650, avg=11005.32, stdev=2836.43 00:11:46.767 clat percentiles (usec): 00:11:46.767 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 7111], 20.00th=[ 9634], 00:11:46.767 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11469], 00:11:46.767 | 70.00th=[11863], 80.00th=[12518], 90.00th=[12911], 95.00th=[13042], 00:11:46.767 | 99.00th=[23462], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:11:46.767 | 99.99th=[25560] 00:11:46.767 bw ( KiB/s): min=21146, max=23952, per=31.56%, avg=22549.00, stdev=1984.14, samples=2 00:11:46.767 iops : min= 5286, max= 5988, avg=5637.00, 
stdev=496.39, samples=2 00:11:46.767 lat (msec) : 4=0.03%, 10=22.86%, 20=74.93%, 50=2.18% 00:11:46.767 cpu : usr=5.48%, sys=13.65%, ctx=718, majf=0, minf=7 00:11:46.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:46.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.767 issued rwts: total=5391,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.767 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.767 job2: (groupid=0, jobs=1): err= 0: pid=77281: Mon Jul 15 14:28:26 2024 00:11:46.767 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:11:46.767 slat (usec): min=3, max=13027, avg=151.18, stdev=865.83 00:11:46.768 clat (usec): min=5660, max=35378, avg=19522.66, stdev=5825.02 00:11:46.768 lat (usec): min=5673, max=36035, avg=19673.84, stdev=5891.42 00:11:46.768 clat percentiles (usec): 00:11:46.768 | 1.00th=[10683], 5.00th=[11863], 10.00th=[12256], 20.00th=[13960], 00:11:46.768 | 30.00th=[14484], 40.00th=[16319], 50.00th=[19268], 60.00th=[22938], 00:11:46.768 | 70.00th=[24249], 80.00th=[25297], 90.00th=[26608], 95.00th=[27657], 00:11:46.768 | 99.00th=[31589], 99.50th=[32375], 99.90th=[33162], 99.95th=[35390], 00:11:46.768 | 99.99th=[35390] 00:11:46.768 write: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1008msec); 0 zone resets 00:11:46.768 slat (usec): min=4, max=11374, avg=116.91, stdev=631.24 00:11:46.768 clat (usec): min=3693, max=31992, avg=15657.92, stdev=4498.47 00:11:46.768 lat (usec): min=4079, max=32024, avg=15774.83, stdev=4549.93 00:11:46.768 clat percentiles (usec): 00:11:46.768 | 1.00th=[ 6325], 5.00th=[ 8356], 10.00th=[10683], 20.00th=[13566], 00:11:46.768 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:11:46.768 | 70.00th=[15401], 80.00th=[19530], 90.00th=[22414], 95.00th=[24511], 00:11:46.768 | 99.00th=[27919], 99.50th=[28705], 99.90th=[31851], 99.95th=[31851], 00:11:46.768 | 99.99th=[32113] 00:11:46.768 bw ( KiB/s): min=10488, max=18220, per=20.09%, avg=14354.00, stdev=5467.35, samples=2 00:11:46.768 iops : min= 2622, max= 4555, avg=3588.50, stdev=1366.84, samples=2 00:11:46.768 lat (msec) : 4=0.01%, 10=4.69%, 20=62.04%, 50=33.26% 00:11:46.768 cpu : usr=3.08%, sys=11.02%, ctx=700, majf=0, minf=5 00:11:46.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:46.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.768 issued rwts: total=3584,3687,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.768 job3: (groupid=0, jobs=1): err= 0: pid=77282: Mon Jul 15 14:28:26 2024 00:11:46.768 read: IOPS=4184, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1009msec) 00:11:46.768 slat (usec): min=4, max=13246, avg=124.44, stdev=802.79 00:11:46.768 clat (usec): min=1488, max=28643, avg=15524.93, stdev=3993.26 00:11:46.768 lat (usec): min=6009, max=28656, avg=15649.37, stdev=4023.61 00:11:46.768 clat percentiles (usec): 00:11:46.768 | 1.00th=[ 6390], 5.00th=[10945], 10.00th=[11469], 20.00th=[12125], 00:11:46.768 | 30.00th=[13304], 40.00th=[14222], 50.00th=[14484], 60.00th=[15401], 00:11:46.768 | 70.00th=[16712], 80.00th=[18220], 90.00th=[21365], 95.00th=[24249], 00:11:46.768 | 99.00th=[26870], 99.50th=[27395], 99.90th=[28705], 99.95th=[28705], 00:11:46.768 | 99.99th=[28705] 00:11:46.768 write: 
IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:11:46.768 slat (usec): min=4, max=12372, avg=96.48, stdev=474.75 00:11:46.768 clat (usec): min=4860, max=28660, avg=13499.98, stdev=2712.94 00:11:46.768 lat (usec): min=4880, max=28676, avg=13596.46, stdev=2755.40 00:11:46.768 clat percentiles (usec): 00:11:46.768 | 1.00th=[ 5669], 5.00th=[ 7177], 10.00th=[ 8979], 20.00th=[11731], 00:11:46.768 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:11:46.768 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15533], 95.00th=[15664], 00:11:46.768 | 99.00th=[16450], 99.50th=[16712], 99.90th=[27395], 99.95th=[28181], 00:11:46.768 | 99.99th=[28705] 00:11:46.768 bw ( KiB/s): min=17664, max=19184, per=25.79%, avg=18424.00, stdev=1074.80, samples=2 00:11:46.768 iops : min= 4416, max= 4796, avg=4606.00, stdev=268.70, samples=2 00:11:46.768 lat (msec) : 2=0.01%, 10=7.40%, 20=85.91%, 50=6.68% 00:11:46.768 cpu : usr=5.06%, sys=10.62%, ctx=586, majf=0, minf=5 00:11:46.768 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:46.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.768 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.768 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.768 00:11:46.768 Run status group 0 (all jobs): 00:11:46.768 READ: bw=65.4MiB/s (68.6MB/s), 13.9MiB/s-21.0MiB/s (14.6MB/s-22.0MB/s), io=66.0MiB (69.2MB), run=1005-1009msec 00:11:46.768 WRITE: bw=69.8MiB/s (73.2MB/s), 14.3MiB/s-21.9MiB/s (15.0MB/s-23.0MB/s), io=70.4MiB (73.8MB), run=1005-1009msec 00:11:46.768 00:11:46.768 Disk stats (read/write): 00:11:46.768 nvme0n1: ios=3441/3584, merge=0/0, ticks=43132/41116, in_queue=84248, util=88.98% 00:11:46.768 nvme0n2: ios=4657/4815, merge=0/0, ticks=52454/50408, in_queue=102862, util=89.99% 00:11:46.768 nvme0n3: ios=3111/3474, merge=0/0, ticks=41877/41327, in_queue=83204, util=90.55% 00:11:46.768 nvme0n4: ios=3584/4055, merge=0/0, ticks=52012/52345, in_queue=104357, util=89.89% 00:11:46.768 14:28:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:46.768 14:28:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77295 00:11:46.768 14:28:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:46.768 14:28:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:46.768 [global] 00:11:46.768 thread=1 00:11:46.768 invalidate=1 00:11:46.768 rw=read 00:11:46.768 time_based=1 00:11:46.768 runtime=10 00:11:46.768 ioengine=libaio 00:11:46.768 direct=1 00:11:46.768 bs=4096 00:11:46.768 iodepth=1 00:11:46.768 norandommap=1 00:11:46.768 numjobs=1 00:11:46.768 00:11:46.768 [job0] 00:11:46.768 filename=/dev/nvme0n1 00:11:46.768 [job1] 00:11:46.768 filename=/dev/nvme0n2 00:11:46.768 [job2] 00:11:46.768 filename=/dev/nvme0n3 00:11:46.768 [job3] 00:11:46.768 filename=/dev/nvme0n4 00:11:46.768 Could not set queue depth (nvme0n1) 00:11:46.768 Could not set queue depth (nvme0n2) 00:11:46.768 Could not set queue depth (nvme0n3) 00:11:46.768 Could not set queue depth (nvme0n4) 00:11:46.768 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.768 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.768 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.768 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.768 fio-3.35 00:11:46.768 Starting 4 threads 00:11:50.065 14:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:50.065 fio: pid=77338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:50.065 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=56569856, buflen=4096 00:11:50.065 14:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:50.065 fio: pid=77337, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:50.065 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42180608, buflen=4096 00:11:50.065 14:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.065 14:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:50.323 fio: pid=77335, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:50.323 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1843200, buflen=4096 00:11:50.323 14:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.323 14:28:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:50.580 fio: pid=77336, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:50.581 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=56107008, buflen=4096 00:11:50.581 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.581 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:50.839 00:11:50.839 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77335: Mon Jul 15 14:28:30 2024 00:11:50.839 read: IOPS=4957, BW=19.4MiB/s (20.3MB/s)(65.8MiB/3396msec) 00:11:50.839 slat (usec): min=7, max=9775, avg=21.20, stdev=135.45 00:11:50.839 clat (usec): min=3, max=7432, avg=178.55, stdev=73.80 00:11:50.839 lat (usec): min=145, max=10134, avg=199.75, stdev=158.04 00:11:50.839 clat percentiles (usec): 00:11:50.839 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:11:50.839 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:11:50.839 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 249], 00:11:50.839 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 594], 99.95th=[ 750], 00:11:50.839 | 99.99th=[ 3064] 00:11:50.839 bw ( KiB/s): min=18864, max=21624, per=34.56%, avg=20209.33, stdev=1090.84, samples=6 00:11:50.839 iops : min= 4716, max= 5406, avg=5052.33, stdev=272.71, samples=6 00:11:50.839 lat (usec) : 4=0.01%, 100=0.01%, 250=95.03%, 500=4.82%, 750=0.08% 00:11:50.839 lat (usec) : 1000=0.01% 00:11:50.839 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:11:50.839 cpu : usr=1.91%, sys=8.07%, ctx=17127, majf=0, minf=1 00:11:50.839 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:50.839 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.839 issued rwts: total=16835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.839 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.839 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77336: Mon Jul 15 14:28:30 2024 00:11:50.839 read: IOPS=3664, BW=14.3MiB/s (15.0MB/s)(53.5MiB/3738msec) 00:11:50.839 slat (usec): min=7, max=10740, avg=20.26, stdev=170.97 00:11:50.839 clat (usec): min=2, max=14828, avg=250.68, stdev=146.83 00:11:50.839 lat (usec): min=145, max=14844, avg=270.94, stdev=224.94 00:11:50.839 clat percentiles (usec): 00:11:50.839 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 161], 00:11:50.839 | 30.00th=[ 190], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:11:50.839 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 322], 00:11:50.839 | 99.00th=[ 392], 99.50th=[ 424], 99.90th=[ 627], 99.95th=[ 1270], 00:11:50.839 | 99.99th=[ 3130] 00:11:50.839 bw ( KiB/s): min=12408, max=18476, per=24.44%, avg=14290.86, stdev=2378.55, samples=7 00:11:50.840 iops : min= 3102, max= 4619, avg=3572.71, stdev=594.64, samples=7 00:11:50.840 lat (usec) : 4=0.03%, 10=0.01%, 50=0.01%, 100=0.01%, 250=33.10% 00:11:50.840 lat (usec) : 500=66.62%, 750=0.18% 00:11:50.840 lat (msec) : 2=0.02%, 4=0.02%, 20=0.01% 00:11:50.840 cpu : usr=1.12%, sys=5.35%, ctx=13804, majf=0, minf=1 00:11:50.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.840 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.840 issued rwts: total=13699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.840 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77337: Mon Jul 15 14:28:30 2024 00:11:50.840 read: IOPS=3264, BW=12.8MiB/s (13.4MB/s)(40.2MiB/3155msec) 00:11:50.840 slat (usec): min=7, max=7743, avg=17.60, stdev=103.72 00:11:50.840 clat (usec): min=3, max=3959, avg=286.45, stdev=70.74 00:11:50.840 lat (usec): min=181, max=8002, avg=304.05, stdev=125.15 00:11:50.840 clat percentiles (usec): 00:11:50.840 | 1.00th=[ 194], 5.00th=[ 223], 10.00th=[ 253], 20.00th=[ 273], 00:11:50.840 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:50.840 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:11:50.840 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 725], 99.95th=[ 1680], 00:11:50.840 | 99.99th=[ 3228] 00:11:50.840 bw ( KiB/s): min=12600, max=13976, per=22.47%, avg=13136.00, stdev=453.68, samples=6 00:11:50.840 iops : min= 3150, max= 3494, avg=3284.00, stdev=113.42, samples=6 00:11:50.840 lat (usec) : 4=0.02%, 250=9.40%, 500=90.29%, 750=0.18%, 1000=0.01% 00:11:50.840 lat (msec) : 2=0.05%, 4=0.04% 00:11:50.840 cpu : usr=1.05%, sys=4.72%, ctx=10554, majf=0, minf=1 00:11:50.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.840 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.840 issued rwts: total=10299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.840 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O 
error): pid=77338: Mon Jul 15 14:28:30 2024 00:11:50.840 read: IOPS=4751, BW=18.6MiB/s (19.5MB/s)(53.9MiB/2907msec) 00:11:50.840 slat (nsec): min=13824, max=88203, avg=18664.20, stdev=5634.24 00:11:50.840 clat (usec): min=145, max=2048, avg=190.00, stdev=47.81 00:11:50.840 lat (usec): min=160, max=2067, avg=208.66, stdev=48.62 00:11:50.840 clat percentiles (usec): 00:11:50.840 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:11:50.840 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:11:50.840 | 70.00th=[ 190], 80.00th=[ 212], 90.00th=[ 251], 95.00th=[ 277], 00:11:50.840 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 668], 99.95th=[ 717], 00:11:50.840 | 99.99th=[ 1270] 00:11:50.840 bw ( KiB/s): min=17224, max=21624, per=32.35%, avg=18915.20, stdev=1704.33, samples=5 00:11:50.840 iops : min= 4306, max= 5406, avg=4728.80, stdev=426.08, samples=5 00:11:50.840 lat (usec) : 250=89.80%, 500=10.00%, 750=0.16%, 1000=0.02% 00:11:50.840 lat (msec) : 2=0.01%, 4=0.01% 00:11:50.840 cpu : usr=1.48%, sys=7.36%, ctx=13812, majf=0, minf=1 00:11:50.840 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:50.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.840 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.840 issued rwts: total=13812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.840 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:50.840 00:11:50.840 Run status group 0 (all jobs): 00:11:50.840 READ: bw=57.1MiB/s (59.9MB/s), 12.8MiB/s-19.4MiB/s (13.4MB/s-20.3MB/s), io=213MiB (224MB), run=2907-3738msec 00:11:50.840 00:11:50.840 Disk stats (read/write): 00:11:50.840 nvme0n1: ios=16705/0, merge=0/0, ticks=3069/0, in_queue=3069, util=95.56% 00:11:50.840 nvme0n2: ios=13045/0, merge=0/0, ticks=3344/0, in_queue=3344, util=95.80% 00:11:50.840 nvme0n3: ios=10209/0, merge=0/0, ticks=2890/0, in_queue=2890, util=96.12% 00:11:50.840 nvme0n4: ios=13674/0, merge=0/0, ticks=2683/0, in_queue=2683, util=96.77% 00:11:50.840 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:50.840 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:51.098 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.098 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:51.356 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.356 14:28:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:51.615 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.615 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:51.874 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:51.874 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77295 00:11:51.874 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:51.874 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:11:51.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.874 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.874 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:52.133 nvmf hotplug test: fio failed as expected 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:52.133 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:52.391 rmmod nvme_tcp 00:11:52.391 rmmod nvme_fabrics 00:11:52.391 rmmod nvme_keyring 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76819 ']' 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76819 00:11:52.391 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76819 ']' 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76819 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76819 00:11:52.392 killing process with pid 76819 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:52.392 14:28:31 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76819' 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76819 00:11:52.392 14:28:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76819 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:52.651 ************************************ 00:11:52.651 END TEST nvmf_fio_target 00:11:52.651 ************************************ 00:11:52.651 00:11:52.651 real 0m18.835s 00:11:52.651 user 1m12.082s 00:11:52.651 sys 0m9.040s 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.651 14:28:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.651 14:28:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:52.651 14:28:32 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:52.651 14:28:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:52.651 14:28:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.651 14:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:52.651 ************************************ 00:11:52.651 START TEST nvmf_bdevio 00:11:52.651 ************************************ 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:52.651 * Looking for test storage... 
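The teardown traced above reduces to a short command sequence. A minimal sketch follows, using the same rpc.py path, bdev names, and NQN that appear in the trace; harness helpers such as waitforserial_disconnect and killprocess are glue around these calls and are not reproduced here.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop the extra malloc bdevs that backed the fio jobs.
    for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$bdev"
    done

    # Detach the kernel initiator, then remove the subsystem it was connected to.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # fio verify-state files left in the working directory.
    rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state

    # Unload the initiator modules; removing nvme-tcp also pulls out
    # nvme-fabrics and nvme-keyring, which is what the rmmod lines above show.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

After that the harness kills the target process (pid 76819 in this run) and flushes the addresses left on nvmf_init_if, which is the state the next test, nvmf_bdevio, starts from.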
00:11:52.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.651 14:28:32 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.651 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:52.909 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:52.909 Cannot find device "nvmf_tgt_br" 00:11:52.909 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:52.909 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.909 Cannot find device "nvmf_tgt_br2" 00:11:52.909 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:52.909 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:52.910 Cannot find device "nvmf_tgt_br" 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:52.910 Cannot find device "nvmf_tgt_br2" 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.910 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:52.910 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:53.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:11:53.168 00:11:53.168 --- 10.0.0.2 ping statistics --- 00:11:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.168 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:53.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:53.168 00:11:53.168 --- 10.0.0.3 ping statistics --- 00:11:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.168 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:53.168 00:11:53.168 --- 10.0.0.1 ping statistics --- 00:11:53.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.168 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77670 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77670 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77670 ']' 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.168 14:28:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:53.168 [2024-07-15 14:28:32.633238] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:11:53.168 [2024-07-15 14:28:32.633335] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.427 [2024-07-15 14:28:32.769862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.427 [2024-07-15 14:28:32.855546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.427 [2024-07-15 14:28:32.855604] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
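The nvmf_veth_init sequence above builds the same three-interface topology every run: the initiator side stays in the root namespace on 10.0.0.1, the target gets two interfaces (10.0.0.2 and 10.0.0.3) inside nvmf_tgt_ns_spdk, and the host-side peers are bridged together. Condensed into plain iproute2 commands, with the preliminary cleanup and error handling dropped, it is roughly:

    # Target side lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and address everything.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring every link up, including loopback inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open TCP/4420 on the initiator interface, allow bridge forwarding,
    # and verify reachability in both directions.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once all three pings succeed, nvmfappstart launches nvmf_tgt inside the namespace and waits for its RPC socket, which is what the spdk_app_start and reactor notices around this point correspond to.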
00:11:53.427 [2024-07-15 14:28:32.855616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.427 [2024-07-15 14:28:32.855624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.427 [2024-07-15 14:28:32.855632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.427 [2024-07-15 14:28:32.855774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:53.427 [2024-07-15 14:28:32.856763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:53.427 [2024-07-15 14:28:32.856890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:53.427 [2024-07-15 14:28:32.856894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 [2024-07-15 14:28:33.705888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 Malloc0 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
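The target provisioning traced above is a handful of RPCs against the nvmf_tgt that nvmfappstart launched. The rpc_cmd calls go through a harness wrapper; issued directly with scripts/rpc.py (the same script the fio test used for bdev_malloc_delete), the equivalent sequence is approximately:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target is already running inside nvmf_tgt_ns_spdk and listening on /var/tmp/spdk.sock.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192     # same transport options the harness passes
    "$rpc" bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk with 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The add_listener call is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that follows.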
00:11:54.361 [2024-07-15 14:28:33.756672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:54.361 { 00:11:54.361 "params": { 00:11:54.361 "name": "Nvme$subsystem", 00:11:54.361 "trtype": "$TEST_TRANSPORT", 00:11:54.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.361 "adrfam": "ipv4", 00:11:54.361 "trsvcid": "$NVMF_PORT", 00:11:54.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.361 "hdgst": ${hdgst:-false}, 00:11:54.361 "ddgst": ${ddgst:-false} 00:11:54.361 }, 00:11:54.361 "method": "bdev_nvme_attach_controller" 00:11:54.361 } 00:11:54.361 EOF 00:11:54.361 )") 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:54.361 14:28:33 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:54.361 "params": { 00:11:54.361 "name": "Nvme1", 00:11:54.361 "trtype": "tcp", 00:11:54.361 "traddr": "10.0.0.2", 00:11:54.361 "adrfam": "ipv4", 00:11:54.361 "trsvcid": "4420", 00:11:54.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:54.361 "hdgst": false, 00:11:54.361 "ddgst": false 00:11:54.361 }, 00:11:54.361 "method": "bdev_nvme_attach_controller" 00:11:54.361 }' 00:11:54.361 [2024-07-15 14:28:33.808831] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
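gen_nvmf_target_json above assembles the bdev configuration that bdevio reads from /dev/fd/62: a single bdev_nvme_attach_controller entry with exactly the parameters shown in the printf output. Written to an ordinary file instead (the file name below is made up for the example, and the outer subsystems/bdev/config wrapper is the standard SPDK JSON-config layout rather than a verbatim copy of what jq printed), the same run looks roughly like:

    cat > /tmp/bdevio_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json

bdevio attaches Nvme1 over TCP at startup and runs its CUnit suite against the resulting bdev, which is why the only I/O target listed below is Nvme1n1.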
00:11:54.361 [2024-07-15 14:28:33.808915] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77724 ] 00:11:54.361 [2024-07-15 14:28:33.946345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.619 [2024-07-15 14:28:34.006027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.619 [2024-07-15 14:28:34.006142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.619 [2024-07-15 14:28:34.006149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.619 I/O targets: 00:11:54.619 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:54.619 00:11:54.619 00:11:54.619 CUnit - A unit testing framework for C - Version 2.1-3 00:11:54.619 http://cunit.sourceforge.net/ 00:11:54.619 00:11:54.619 00:11:54.619 Suite: bdevio tests on: Nvme1n1 00:11:54.619 Test: blockdev write read block ...passed 00:11:54.920 Test: blockdev write zeroes read block ...passed 00:11:54.920 Test: blockdev write zeroes read no split ...passed 00:11:54.920 Test: blockdev write zeroes read split ...passed 00:11:54.920 Test: blockdev write zeroes read split partial ...passed 00:11:54.920 Test: blockdev reset ...[2024-07-15 14:28:34.256605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:54.920 [2024-07-15 14:28:34.256766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7d180 (9): Bad file descriptor 00:11:54.920 [2024-07-15 14:28:34.267737] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:54.920 passed 00:11:54.920 Test: blockdev write read 8 blocks ...passed 00:11:54.920 Test: blockdev write read size > 128k ...passed 00:11:54.920 Test: blockdev write read invalid size ...passed 00:11:54.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:54.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:54.920 Test: blockdev write read max offset ...passed 00:11:54.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:54.920 Test: blockdev writev readv 8 blocks ...passed 00:11:54.920 Test: blockdev writev readv 30 x 1block ...passed 00:11:54.920 Test: blockdev writev readv block ...passed 00:11:54.920 Test: blockdev writev readv size > 128k ...passed 00:11:54.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:54.920 Test: blockdev comparev and writev ...[2024-07-15 14:28:34.438038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.438084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.438105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.438116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.438642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.438671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.438689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.438712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.439146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.439175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.439193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.439205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.439677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.439716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:54.920 [2024-07-15 14:28:34.439734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:54.920 [2024-07-15 14:28:34.439744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:54.920 passed 00:11:55.193 Test: blockdev nvme passthru rw ...passed 00:11:55.193 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:28:34.522047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.193 [2024-07-15 14:28:34.522098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:55.193 [2024-07-15 14:28:34.522224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.193 [2024-07-15 14:28:34.522242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:55.193 [2024-07-15 14:28:34.522352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.193 [2024-07-15 14:28:34.522382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:55.193 [2024-07-15 14:28:34.522502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:55.193 [2024-07-15 14:28:34.522528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:55.193 passed 00:11:55.193 Test: blockdev nvme admin passthru ...passed 00:11:55.193 Test: blockdev copy ...passed 00:11:55.193 00:11:55.193 Run Summary: Type Total Ran Passed Failed Inactive 00:11:55.193 suites 1 1 n/a 0 0 00:11:55.193 tests 23 23 23 0 0 00:11:55.193 asserts 152 152 152 0 n/a 00:11:55.193 00:11:55.193 Elapsed time = 0.883 seconds 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.193 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.193 rmmod nvme_tcp 00:11:55.453 rmmod nvme_fabrics 00:11:55.453 rmmod nvme_keyring 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77670 ']' 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77670 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77670 ']' 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77670 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77670 00:11:55.453 killing process with pid 77670 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77670' 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77670 00:11:55.453 14:28:34 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77670 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.453 14:28:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:55.712 00:11:55.712 real 0m2.927s 00:11:55.712 user 0m10.510s 00:11:55.712 sys 0m0.647s 00:11:55.712 14:28:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.712 14:28:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:55.712 ************************************ 00:11:55.712 END TEST nvmf_bdevio 00:11:55.712 ************************************ 00:11:55.712 14:28:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:55.712 14:28:35 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:55.712 14:28:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.712 14:28:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.712 14:28:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.712 ************************************ 00:11:55.712 START TEST nvmf_auth_target 00:11:55.712 ************************************ 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:55.712 * Looking for test storage... 
00:11:55.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:55.712 Cannot find device "nvmf_tgt_br" 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:55.712 Cannot find device "nvmf_tgt_br2" 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:55.712 Cannot find device "nvmf_tgt_br" 00:11:55.712 
14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:55.712 Cannot find device "nvmf_tgt_br2" 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:55.712 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:55.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:55.971 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.971 14:28:35 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:55.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:55.971 00:11:55.971 --- 10.0.0.2 ping statistics --- 00:11:55.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.971 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:55.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:55.971 00:11:55.971 --- 10.0.0.3 ping statistics --- 00:11:55.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.971 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:55.971 00:11:55.971 --- 10.0.0.1 ping statistics --- 00:11:55.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.971 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77895 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77895 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77895 ']' 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.971 14:28:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.971 14:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77939 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa98d2cc7b7abcdff3d17651abcaa484f555be85fa184705 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fvp 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa98d2cc7b7abcdff3d17651abcaa484f555be85fa184705 0 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa98d2cc7b7abcdff3d17651abcaa484f555be85fa184705 0 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa98d2cc7b7abcdff3d17651abcaa484f555be85fa184705 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fvp 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fvp 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.fvp 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=aa1d60e4bdba6b96f50dc8f4ee89660edc922d198fc8c3cfb57eee60c5bc80ba 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SxX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key aa1d60e4bdba6b96f50dc8f4ee89660edc922d198fc8c3cfb57eee60c5bc80ba 3 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 aa1d60e4bdba6b96f50dc8f4ee89660edc922d198fc8c3cfb57eee60c5bc80ba 3 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=aa1d60e4bdba6b96f50dc8f4ee89660edc922d198fc8c3cfb57eee60c5bc80ba 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SxX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SxX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.SxX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3e440b62c00af75a4dca92bf8fe2f76f 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.L1R 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3e440b62c00af75a4dca92bf8fe2f76f 1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3e440b62c00af75a4dca92bf8fe2f76f 1 
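The gen_dhchap_key/format_dhchap_key sequence above (and repeated below for the remaining keys and ckeys) turns a hex string read from /dev/urandom into the DHHC-1:<t>:<base64>: secrets that show up later on the nvme connect command lines, where <t> is the key transform id (00 = none, 01 = sha256, 02 = sha384, 03 = sha512). A minimal stand-alone sketch of that encoding, assuming the usual DH-HMAC-CHAP representation of base64(ASCII hex key plus its CRC-32); the helper name, the output path and the little-endian CRC byte order are illustrative assumptions, not lifted verbatim from nvmf/common.sh:

gen_dhchap_secret() {
    local transform_id=$1 hex_len=$2 key b64
    # same read as nvmf/common.sh@727: hex_len hex characters come from hex_len/2 bytes of /dev/urandom
    key=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)
    # assumed encoding: base64 of the ASCII hex key with a 4-byte CRC-32 appended (byte order assumed little-endian)
    b64=$(python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key")
    printf 'DHHC-1:%02x:%s:\n' "$transform_id" "$b64"
}

# transform ids: 0 = no hash, 1 = sha256, 2 = sha384, 3 = sha512 (matches the :00:/:01:/:02:/:03: prefixes seen below)
gen_dhchap_secret 0 48 > /tmp/spdk.key-null.example && chmod 0600 /tmp/spdk.key-null.example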
00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3e440b62c00af75a4dca92bf8fe2f76f 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.L1R 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.L1R 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.L1R 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48dcb3a79f120c705203ce244bffab612139c390b2097a0f 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LZj 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48dcb3a79f120c705203ce244bffab612139c390b2097a0f 2 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48dcb3a79f120c705203ce244bffab612139c390b2097a0f 2 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48dcb3a79f120c705203ce244bffab612139c390b2097a0f 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LZj 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LZj 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.LZj 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:57.349 
14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=869b1c2625682934e0a9bc61a76c0b6498a4eddac1d20dbf 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Uto 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 869b1c2625682934e0a9bc61a76c0b6498a4eddac1d20dbf 2 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 869b1c2625682934e0a9bc61a76c0b6498a4eddac1d20dbf 2 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=869b1c2625682934e0a9bc61a76c0b6498a4eddac1d20dbf 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Uto 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Uto 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Uto 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=63d846c3945dcfa843007c412fe53ec8 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:57.349 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gOM 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 63d846c3945dcfa843007c412fe53ec8 1 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 63d846c3945dcfa843007c412fe53ec8 1 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=63d846c3945dcfa843007c412fe53ec8 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:57.350 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gOM 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gOM 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.gOM 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=acaaa9cd2655fe5e874ce971d1f86ce74a27f216f02581e3c50331bca3ec2413 00:11:57.608 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.08h 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key acaaa9cd2655fe5e874ce971d1f86ce74a27f216f02581e3c50331bca3ec2413 3 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 acaaa9cd2655fe5e874ce971d1f86ce74a27f216f02581e3c50331bca3ec2413 3 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=acaaa9cd2655fe5e874ce971d1f86ce74a27f216f02581e3c50331bca3ec2413 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:57.609 14:28:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.08h 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.08h 00:11:57.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.08h 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77895 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77895 ']' 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.609 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
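With all four host keys (keys[0..3]) and the three controller keys (ckeys[0..2]) written out, the rest of this log repeatedly exercises the same authentication round trip: the nvmf_tgt started above inside the nvmf_tgt_ns_spdk namespace (listening on 10.0.0.2:4420, RPC socket /var/tmp/spdk.sock) is the target, and the spdk_tgt on /var/tmp/host.sock plays the NVMe host through bdev_nvme, reaching the target over the veth/bridge plumbing set up at the top of this section. Below is one pass condensed into a hedged sketch; the sockets, NQNs, key paths and RPC calls are taken from the log itself, the shell variable names are mine, and the subsystem/listener creation is not part of this excerpt and is assumed to have happened earlier in target/auth.sh:

SPDK=/home/vagrant/spdk_repo/spdk
TGT_RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"    # target-side RPC (nvmf_tgt in the netns)
HOST_RPC="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"   # host-side RPC (spdk_tgt acting as initiator)
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95
SUBNQN=nqn.2024-03.io.spdk:cnode0

# register the key files (they hold the DHHC-1:... strings) with both keyrings; done once for all keys
$TGT_RPC  keyring_file_add_key key0  /tmp/spdk.key-null.fvp
$TGT_RPC  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SxX
$HOST_RPC keyring_file_add_key key0  /tmp/spdk.key-null.fvp
$HOST_RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SxX

# host picks the digest/dhgroup to negotiate; target allows the host NQN with key0 (ckey0 => bidirectional auth)
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# attach, verify the qpair authenticated (auth.state should read "completed"), then tear down
$HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'
$HOST_RPC bdev_nvme_detach_controller nvme0

# the same secrets are then fed to the kernel initiator before the host entry is removed again
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 \
    --dhchap-secret "$(cat /tmp/spdk.key-null.fvp)" --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha512.SxX)"
nvme disconnect -n "$SUBNQN"
$TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"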
00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77939 /var/tmp/host.sock 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77939 ']' 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.867 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fvp 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fvp 00:11:58.126 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fvp 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.SxX ]] 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SxX 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SxX 00:11:58.384 14:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SxX 00:11:58.950 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:58.950 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.L1R 00:11:58.950 14:28:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.950 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.950 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.950 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.L1R 00:11:58.950 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.L1R 00:11:59.208 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.LZj ]] 00:11:59.208 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LZj 00:11:59.208 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.209 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.209 14:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.209 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LZj 00:11:59.209 14:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LZj 00:11:59.467 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:59.467 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Uto 00:11:59.467 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.467 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.724 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.724 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Uto 00:11:59.724 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Uto 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.gOM ]] 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gOM 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gOM 00:11:59.981 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gOM 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.08h 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.08h 00:12:00.239 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.08h 00:12:00.497 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:12:00.497 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:00.497 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.497 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.497 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:00.497 14:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.755 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.322 00:12:01.322 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.322 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.322 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.579 { 00:12:01.579 "auth": { 00:12:01.579 "dhgroup": "null", 00:12:01.579 "digest": "sha256", 00:12:01.579 "state": "completed" 00:12:01.579 }, 00:12:01.579 "cntlid": 1, 00:12:01.579 "listen_address": { 00:12:01.579 "adrfam": "IPv4", 00:12:01.579 "traddr": "10.0.0.2", 00:12:01.579 "trsvcid": "4420", 00:12:01.579 "trtype": "TCP" 00:12:01.579 }, 00:12:01.579 "peer_address": { 00:12:01.579 "adrfam": "IPv4", 00:12:01.579 "traddr": "10.0.0.1", 00:12:01.579 "trsvcid": "56908", 00:12:01.579 "trtype": "TCP" 00:12:01.579 }, 00:12:01.579 "qid": 0, 00:12:01.579 "state": "enabled", 00:12:01.579 "thread": "nvmf_tgt_poll_group_000" 00:12:01.579 } 00:12:01.579 ]' 00:12:01.579 14:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.579 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.836 14:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.094 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.351 00:12:07.351 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.351 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.351 14:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.662 { 00:12:07.662 "auth": { 00:12:07.662 "dhgroup": "null", 00:12:07.662 "digest": "sha256", 00:12:07.662 "state": "completed" 00:12:07.662 }, 00:12:07.662 "cntlid": 3, 00:12:07.662 "listen_address": { 00:12:07.662 "adrfam": "IPv4", 00:12:07.662 "traddr": "10.0.0.2", 00:12:07.662 "trsvcid": "4420", 00:12:07.662 "trtype": "TCP" 00:12:07.662 }, 00:12:07.662 "peer_address": { 00:12:07.662 "adrfam": "IPv4", 00:12:07.662 "traddr": "10.0.0.1", 00:12:07.662 "trsvcid": "56930", 00:12:07.662 "trtype": "TCP" 00:12:07.662 }, 00:12:07.662 "qid": 0, 00:12:07.662 "state": "enabled", 00:12:07.662 "thread": "nvmf_tgt_poll_group_000" 
00:12:07.662 } 00:12:07.662 ]' 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.662 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.918 14:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.852 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.110 00:12:09.110 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.110 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:09.110 14:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.691 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.692 { 00:12:09.692 "auth": { 00:12:09.692 "dhgroup": "null", 00:12:09.692 "digest": "sha256", 00:12:09.692 "state": "completed" 00:12:09.692 }, 00:12:09.692 "cntlid": 5, 00:12:09.692 "listen_address": { 00:12:09.692 "adrfam": "IPv4", 00:12:09.692 "traddr": "10.0.0.2", 00:12:09.692 "trsvcid": "4420", 00:12:09.692 "trtype": "TCP" 00:12:09.692 }, 00:12:09.692 "peer_address": { 00:12:09.692 "adrfam": "IPv4", 00:12:09.692 "traddr": "10.0.0.1", 00:12:09.692 "trsvcid": "56954", 00:12:09.692 "trtype": "TCP" 00:12:09.692 }, 00:12:09.692 "qid": 0, 00:12:09.692 "state": "enabled", 00:12:09.692 "thread": "nvmf_tgt_poll_group_000" 00:12:09.692 } 00:12:09.692 ]' 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.692 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.953 14:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid 
de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.519 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.777 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.036 00:12:11.294 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.294 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.294 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.551 { 00:12:11.551 "auth": { 00:12:11.551 "dhgroup": "null", 00:12:11.551 "digest": "sha256", 00:12:11.551 "state": "completed" 00:12:11.551 }, 00:12:11.551 "cntlid": 7, 00:12:11.551 "listen_address": { 00:12:11.551 "adrfam": "IPv4", 00:12:11.551 "traddr": "10.0.0.2", 00:12:11.551 "trsvcid": "4420", 00:12:11.551 "trtype": "TCP" 00:12:11.551 }, 00:12:11.551 "peer_address": { 00:12:11.551 "adrfam": "IPv4", 00:12:11.551 "traddr": "10.0.0.1", 00:12:11.551 "trsvcid": "47608", 00:12:11.551 "trtype": "TCP" 00:12:11.551 }, 00:12:11.551 "qid": 0, 00:12:11.551 "state": "enabled", 00:12:11.551 "thread": "nvmf_tgt_poll_group_000" 00:12:11.551 } 00:12:11.551 ]' 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.551 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.552 14:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.552 14:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:11.552 14:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.552 14:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.552 14:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.552 14:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.809 14:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.757 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:12.757 14:28:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.016 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.334 00:12:13.334 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.334 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.334 14:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.899 { 00:12:13.899 "auth": { 00:12:13.899 "dhgroup": "ffdhe2048", 00:12:13.899 "digest": "sha256", 00:12:13.899 "state": "completed" 00:12:13.899 }, 00:12:13.899 "cntlid": 9, 00:12:13.899 "listen_address": { 00:12:13.899 "adrfam": "IPv4", 00:12:13.899 "traddr": "10.0.0.2", 00:12:13.899 "trsvcid": "4420", 00:12:13.899 "trtype": "TCP" 00:12:13.899 }, 00:12:13.899 "peer_address": { 00:12:13.899 "adrfam": "IPv4", 00:12:13.899 "traddr": "10.0.0.1", 00:12:13.899 "trsvcid": "47648", 00:12:13.899 "trtype": "TCP" 00:12:13.899 }, 00:12:13.899 "qid": 0, 
00:12:13.899 "state": "enabled", 00:12:13.899 "thread": "nvmf_tgt_poll_group_000" 00:12:13.899 } 00:12:13.899 ]' 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.899 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.156 14:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.088 14:28:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.088 14:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.653 00:12:15.653 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.653 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.653 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.910 { 00:12:15.910 "auth": { 00:12:15.910 "dhgroup": "ffdhe2048", 00:12:15.910 "digest": "sha256", 00:12:15.910 "state": "completed" 00:12:15.910 }, 00:12:15.910 "cntlid": 11, 00:12:15.910 "listen_address": { 00:12:15.910 "adrfam": "IPv4", 00:12:15.910 "traddr": "10.0.0.2", 00:12:15.910 "trsvcid": "4420", 00:12:15.910 "trtype": "TCP" 00:12:15.910 }, 00:12:15.910 "peer_address": { 00:12:15.910 "adrfam": "IPv4", 00:12:15.910 "traddr": "10.0.0.1", 00:12:15.910 "trsvcid": "47680", 00:12:15.910 "trtype": "TCP" 00:12:15.910 }, 00:12:15.910 "qid": 0, 00:12:15.910 "state": "enabled", 00:12:15.910 "thread": "nvmf_tgt_poll_group_000" 00:12:15.910 } 00:12:15.910 ]' 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.910 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.168 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.168 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.168 14:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.425 14:28:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.358 14:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.616 00:12:17.873 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.873 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:17.873 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.130 { 00:12:18.130 "auth": { 00:12:18.130 "dhgroup": "ffdhe2048", 00:12:18.130 "digest": "sha256", 00:12:18.130 "state": "completed" 00:12:18.130 }, 00:12:18.130 "cntlid": 13, 00:12:18.130 "listen_address": { 00:12:18.130 "adrfam": "IPv4", 00:12:18.130 "traddr": "10.0.0.2", 00:12:18.130 "trsvcid": "4420", 00:12:18.130 "trtype": "TCP" 00:12:18.130 }, 00:12:18.130 "peer_address": { 00:12:18.130 "adrfam": "IPv4", 00:12:18.130 "traddr": "10.0.0.1", 00:12:18.130 "trsvcid": "47698", 00:12:18.130 "trtype": "TCP" 00:12:18.130 }, 00:12:18.130 "qid": 0, 00:12:18.130 "state": "enabled", 00:12:18.130 "thread": "nvmf_tgt_poll_group_000" 00:12:18.130 } 00:12:18.130 ]' 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:18.130 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.387 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.387 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.387 14:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.646 14:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.580 14:28:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:19.580 14:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.856 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:20.122 00:12:20.122 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.122 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.122 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.380 { 00:12:20.380 "auth": { 00:12:20.380 "dhgroup": "ffdhe2048", 00:12:20.380 "digest": "sha256", 00:12:20.380 "state": "completed" 00:12:20.380 }, 00:12:20.380 "cntlid": 15, 00:12:20.380 "listen_address": { 00:12:20.380 "adrfam": "IPv4", 00:12:20.380 "traddr": "10.0.0.2", 00:12:20.380 "trsvcid": "4420", 00:12:20.380 "trtype": "TCP" 00:12:20.380 }, 00:12:20.380 
"peer_address": { 00:12:20.380 "adrfam": "IPv4", 00:12:20.380 "traddr": "10.0.0.1", 00:12:20.380 "trsvcid": "47722", 00:12:20.380 "trtype": "TCP" 00:12:20.380 }, 00:12:20.380 "qid": 0, 00:12:20.380 "state": "enabled", 00:12:20.380 "thread": "nvmf_tgt_poll_group_000" 00:12:20.380 } 00:12:20.380 ]' 00:12:20.380 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.637 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.637 14:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.637 14:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:20.637 14:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.637 14:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.637 14:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.637 14:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.895 14:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.831 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.089 00:12:22.089 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:22.089 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.089 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:22.348 { 00:12:22.348 "auth": { 00:12:22.348 "dhgroup": "ffdhe3072", 00:12:22.348 "digest": "sha256", 00:12:22.348 "state": "completed" 00:12:22.348 }, 00:12:22.348 "cntlid": 17, 00:12:22.348 "listen_address": { 00:12:22.348 "adrfam": "IPv4", 00:12:22.348 "traddr": "10.0.0.2", 00:12:22.348 "trsvcid": "4420", 00:12:22.348 "trtype": "TCP" 00:12:22.348 }, 00:12:22.348 "peer_address": { 00:12:22.348 "adrfam": "IPv4", 00:12:22.348 "traddr": "10.0.0.1", 00:12:22.348 "trsvcid": "40624", 00:12:22.348 "trtype": "TCP" 00:12:22.348 }, 00:12:22.348 "qid": 0, 00:12:22.348 "state": "enabled", 00:12:22.348 "thread": "nvmf_tgt_poll_group_000" 00:12:22.348 } 00:12:22.348 ]' 00:12:22.348 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.606 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.606 14:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.606 14:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:22.606 14:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.606 14:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.606 14:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.606 14:29:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.865 14:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:23.803 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.061 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.318 00:12:24.318 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.318 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.318 14:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.577 { 00:12:24.577 "auth": { 00:12:24.577 "dhgroup": "ffdhe3072", 00:12:24.577 "digest": "sha256", 00:12:24.577 "state": "completed" 00:12:24.577 }, 00:12:24.577 "cntlid": 19, 00:12:24.577 "listen_address": { 00:12:24.577 "adrfam": "IPv4", 00:12:24.577 "traddr": "10.0.0.2", 00:12:24.577 "trsvcid": "4420", 00:12:24.577 "trtype": "TCP" 00:12:24.577 }, 00:12:24.577 "peer_address": { 00:12:24.577 "adrfam": "IPv4", 00:12:24.577 "traddr": "10.0.0.1", 00:12:24.577 "trsvcid": "40660", 00:12:24.577 "trtype": "TCP" 00:12:24.577 }, 00:12:24.577 "qid": 0, 00:12:24.577 "state": "enabled", 00:12:24.577 "thread": "nvmf_tgt_poll_group_000" 00:12:24.577 } 00:12:24.577 ]' 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.577 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.834 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.090 14:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.655 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.913 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.172 00:12:26.430 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.430 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.430 14:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:26.688 { 00:12:26.688 "auth": { 
00:12:26.688 "dhgroup": "ffdhe3072", 00:12:26.688 "digest": "sha256", 00:12:26.688 "state": "completed" 00:12:26.688 }, 00:12:26.688 "cntlid": 21, 00:12:26.688 "listen_address": { 00:12:26.688 "adrfam": "IPv4", 00:12:26.688 "traddr": "10.0.0.2", 00:12:26.688 "trsvcid": "4420", 00:12:26.688 "trtype": "TCP" 00:12:26.688 }, 00:12:26.688 "peer_address": { 00:12:26.688 "adrfam": "IPv4", 00:12:26.688 "traddr": "10.0.0.1", 00:12:26.688 "trsvcid": "40674", 00:12:26.688 "trtype": "TCP" 00:12:26.688 }, 00:12:26.688 "qid": 0, 00:12:26.688 "state": "enabled", 00:12:26.688 "thread": "nvmf_tgt_poll_group_000" 00:12:26.688 } 00:12:26.688 ]' 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.688 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.253 14:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:27.817 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.075 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:28.333 00:12:28.333 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.333 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.333 14:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.592 { 00:12:28.592 "auth": { 00:12:28.592 "dhgroup": "ffdhe3072", 00:12:28.592 "digest": "sha256", 00:12:28.592 "state": "completed" 00:12:28.592 }, 00:12:28.592 "cntlid": 23, 00:12:28.592 "listen_address": { 00:12:28.592 "adrfam": "IPv4", 00:12:28.592 "traddr": "10.0.0.2", 00:12:28.592 "trsvcid": "4420", 00:12:28.592 "trtype": "TCP" 00:12:28.592 }, 00:12:28.592 "peer_address": { 00:12:28.592 "adrfam": "IPv4", 00:12:28.592 "traddr": "10.0.0.1", 00:12:28.592 "trsvcid": "40692", 00:12:28.592 "trtype": "TCP" 00:12:28.592 }, 00:12:28.592 "qid": 0, 00:12:28.592 "state": "enabled", 00:12:28.592 "thread": "nvmf_tgt_poll_group_000" 00:12:28.592 } 00:12:28.592 ]' 00:12:28.592 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.850 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.108 14:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.674 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:29.675 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:29.675 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.933 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.499 00:12:30.499 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.499 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.499 14:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:30.757 { 00:12:30.757 "auth": { 00:12:30.757 "dhgroup": "ffdhe4096", 00:12:30.757 "digest": "sha256", 00:12:30.757 "state": "completed" 00:12:30.757 }, 00:12:30.757 "cntlid": 25, 00:12:30.757 "listen_address": { 00:12:30.757 "adrfam": "IPv4", 00:12:30.757 "traddr": "10.0.0.2", 00:12:30.757 "trsvcid": "4420", 00:12:30.757 "trtype": "TCP" 00:12:30.757 }, 00:12:30.757 "peer_address": { 00:12:30.757 "adrfam": "IPv4", 00:12:30.757 "traddr": "10.0.0.1", 00:12:30.757 "trsvcid": "40700", 00:12:30.757 "trtype": "TCP" 00:12:30.757 }, 00:12:30.757 "qid": 0, 00:12:30.757 "state": "enabled", 00:12:30.757 "thread": "nvmf_tgt_poll_group_000" 00:12:30.757 } 00:12:30.757 ]' 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.757 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.015 14:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.954 
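After the SPDK host-side controller is detached, the same credentials are exercised once more through the kernel initiator, which is the connect/disconnect pair visible just above. A minimal sketch of that nvme-cli leg for the key3 pass (the DHHC-1 secret is elided here; the full string is in the log above, and passes whose key has a controller counterpart also supply --dhchap-ctrl-secret, as the other iterations show):

  # Kernel-initiator pass for the same key material.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 \
      --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 \
      --dhchap-secret 'DHHC-1:03:...'        # --dhchap-ctrl-secret added when a ckey is configured
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Remove the host entry so the next digest/dhgroup/key combination starts clean.
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95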
14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:31.954 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.213 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.471 00:12:32.471 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.471 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.471 14:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:32.730 { 00:12:32.730 "auth": { 00:12:32.730 "dhgroup": "ffdhe4096", 00:12:32.730 "digest": "sha256", 00:12:32.730 "state": "completed" 00:12:32.730 }, 00:12:32.730 "cntlid": 27, 00:12:32.730 "listen_address": { 00:12:32.730 "adrfam": "IPv4", 00:12:32.730 "traddr": "10.0.0.2", 00:12:32.730 "trsvcid": "4420", 00:12:32.730 "trtype": "TCP" 00:12:32.730 }, 00:12:32.730 "peer_address": { 00:12:32.730 "adrfam": "IPv4", 00:12:32.730 "traddr": "10.0.0.1", 00:12:32.730 "trsvcid": "44334", 00:12:32.730 "trtype": "TCP" 00:12:32.730 }, 00:12:32.730 "qid": 0, 00:12:32.730 "state": "enabled", 00:12:32.730 "thread": "nvmf_tgt_poll_group_000" 00:12:32.730 } 00:12:32.730 ]' 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:32.730 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:32.989 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.989 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.989 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.247 14:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:33.810 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:34.067 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:34.067 14:29:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.067 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:34.067 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:34.067 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:34.067 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.068 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.068 14:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.068 14:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.068 14:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.068 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.068 14:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.633 00:12:34.633 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.633 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.633 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.892 { 00:12:34.892 "auth": { 00:12:34.892 "dhgroup": "ffdhe4096", 00:12:34.892 "digest": "sha256", 00:12:34.892 "state": "completed" 00:12:34.892 }, 00:12:34.892 "cntlid": 29, 00:12:34.892 "listen_address": { 00:12:34.892 "adrfam": "IPv4", 00:12:34.892 "traddr": "10.0.0.2", 00:12:34.892 "trsvcid": "4420", 00:12:34.892 "trtype": "TCP" 00:12:34.892 }, 00:12:34.892 "peer_address": { 00:12:34.892 "adrfam": "IPv4", 00:12:34.892 "traddr": "10.0.0.1", 00:12:34.892 "trsvcid": "44376", 00:12:34.892 "trtype": "TCP" 00:12:34.892 }, 00:12:34.892 "qid": 0, 00:12:34.892 "state": "enabled", 00:12:34.892 "thread": "nvmf_tgt_poll_group_000" 00:12:34.892 } 00:12:34.892 ]' 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.892 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.150 14:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.083 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:36.084 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.343 14:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:36.601 00:12:36.601 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.601 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.601 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.881 { 00:12:36.881 "auth": { 00:12:36.881 "dhgroup": "ffdhe4096", 00:12:36.881 "digest": "sha256", 00:12:36.881 "state": "completed" 00:12:36.881 }, 00:12:36.881 "cntlid": 31, 00:12:36.881 "listen_address": { 00:12:36.881 "adrfam": "IPv4", 00:12:36.881 "traddr": "10.0.0.2", 00:12:36.881 "trsvcid": "4420", 00:12:36.881 "trtype": "TCP" 00:12:36.881 }, 00:12:36.881 "peer_address": { 00:12:36.881 "adrfam": "IPv4", 00:12:36.881 "traddr": "10.0.0.1", 00:12:36.881 "trsvcid": "44394", 00:12:36.881 "trtype": "TCP" 00:12:36.881 }, 00:12:36.881 "qid": 0, 00:12:36.881 "state": "enabled", 00:12:36.881 "thread": "nvmf_tgt_poll_group_000" 00:12:36.881 } 00:12:36.881 ]' 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:36.881 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.139 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.139 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.139 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.139 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.139 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.397 14:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.964 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:37.964 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.222 14:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.223 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.223 14:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.789 00:12:38.789 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.789 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.789 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
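The trace that follows dumps the qpair list and asserts the negotiated parameters with jq. As a minimal sketch (not the test script itself), the verification step after each attach boils down to the lines below; rpc_cmd stands in for the target-side rpc.py wrapper whose socket path does not appear in this excerpt, and the expected values mirror the sha256/ffdhe6144 pass being run here.

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # target-side helper; socket path not shown in this excerpt
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]               # digest negotiated for this pass
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]            # DH group set via bdev_nvme_set_options above
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]            # authentication finished successfully
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0   # tear down the host controller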
00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.047 { 00:12:39.047 "auth": { 00:12:39.047 "dhgroup": "ffdhe6144", 00:12:39.047 "digest": "sha256", 00:12:39.047 "state": "completed" 00:12:39.047 }, 00:12:39.047 "cntlid": 33, 00:12:39.047 "listen_address": { 00:12:39.047 "adrfam": "IPv4", 00:12:39.047 "traddr": "10.0.0.2", 00:12:39.047 "trsvcid": "4420", 00:12:39.047 "trtype": "TCP" 00:12:39.047 }, 00:12:39.047 "peer_address": { 00:12:39.047 "adrfam": "IPv4", 00:12:39.047 "traddr": "10.0.0.1", 00:12:39.047 "trsvcid": "44420", 00:12:39.047 "trtype": "TCP" 00:12:39.047 }, 00:12:39.047 "qid": 0, 00:12:39.047 "state": "enabled", 00:12:39.047 "thread": "nvmf_tgt_poll_group_000" 00:12:39.047 } 00:12:39.047 ]' 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.047 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.615 14:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:40.181 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.440 14:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.007 00:12:41.007 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.007 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.007 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.264 { 00:12:41.264 "auth": { 00:12:41.264 "dhgroup": "ffdhe6144", 00:12:41.264 "digest": "sha256", 00:12:41.264 "state": "completed" 00:12:41.264 }, 00:12:41.264 "cntlid": 35, 00:12:41.264 "listen_address": { 00:12:41.264 "adrfam": "IPv4", 00:12:41.264 "traddr": "10.0.0.2", 00:12:41.264 "trsvcid": "4420", 00:12:41.264 "trtype": "TCP" 00:12:41.264 }, 00:12:41.264 "peer_address": { 00:12:41.264 "adrfam": "IPv4", 00:12:41.264 "traddr": "10.0.0.1", 00:12:41.264 "trsvcid": "44450", 00:12:41.264 "trtype": "TCP" 00:12:41.264 }, 00:12:41.264 "qid": 0, 00:12:41.264 "state": "enabled", 00:12:41.264 "thread": "nvmf_tgt_poll_group_000" 00:12:41.264 } 00:12:41.264 ]' 00:12:41.264 14:29:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.264 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.522 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.522 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.522 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.522 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.522 14:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.781 14:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:42.348 14:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.606 
14:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:42.606 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.185 00:12:43.185 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.185 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.185 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.453 { 00:12:43.453 "auth": { 00:12:43.453 "dhgroup": "ffdhe6144", 00:12:43.453 "digest": "sha256", 00:12:43.453 "state": "completed" 00:12:43.453 }, 00:12:43.453 "cntlid": 37, 00:12:43.453 "listen_address": { 00:12:43.453 "adrfam": "IPv4", 00:12:43.453 "traddr": "10.0.0.2", 00:12:43.453 "trsvcid": "4420", 00:12:43.453 "trtype": "TCP" 00:12:43.453 }, 00:12:43.453 "peer_address": { 00:12:43.453 "adrfam": "IPv4", 00:12:43.453 "traddr": "10.0.0.1", 00:12:43.453 "trsvcid": "57426", 00:12:43.453 "trtype": "TCP" 00:12:43.453 }, 00:12:43.453 "qid": 0, 00:12:43.453 "state": "enabled", 00:12:43.453 "thread": "nvmf_tgt_poll_group_000" 00:12:43.453 } 00:12:43.453 ]' 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.453 14:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.453 14:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.453 14:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.712 14:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.712 14:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.712 14:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.970 14:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid 
de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:12:44.536 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:44.537 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:44.795 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:45.362 00:12:45.362 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.362 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.362 14:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.620 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 
-- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.620 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.620 14:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.620 14:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.620 14:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.620 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.620 { 00:12:45.620 "auth": { 00:12:45.620 "dhgroup": "ffdhe6144", 00:12:45.620 "digest": "sha256", 00:12:45.620 "state": "completed" 00:12:45.620 }, 00:12:45.621 "cntlid": 39, 00:12:45.621 "listen_address": { 00:12:45.621 "adrfam": "IPv4", 00:12:45.621 "traddr": "10.0.0.2", 00:12:45.621 "trsvcid": "4420", 00:12:45.621 "trtype": "TCP" 00:12:45.621 }, 00:12:45.621 "peer_address": { 00:12:45.621 "adrfam": "IPv4", 00:12:45.621 "traddr": "10.0.0.1", 00:12:45.621 "trsvcid": "57454", 00:12:45.621 "trtype": "TCP" 00:12:45.621 }, 00:12:45.621 "qid": 0, 00:12:45.621 "state": "enabled", 00:12:45.621 "thread": "nvmf_tgt_poll_group_000" 00:12:45.621 } 00:12:45.621 ]' 00:12:45.621 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.621 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.621 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.621 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:45.621 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.879 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.879 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.879 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.138 14:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:12:46.705 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:12:46.962 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.221 14:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.787 00:12:47.787 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.787 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.787 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.044 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.045 { 00:12:48.045 "auth": { 00:12:48.045 "dhgroup": "ffdhe8192", 00:12:48.045 "digest": "sha256", 00:12:48.045 "state": "completed" 00:12:48.045 }, 00:12:48.045 "cntlid": 41, 00:12:48.045 "listen_address": { 00:12:48.045 "adrfam": "IPv4", 00:12:48.045 "traddr": "10.0.0.2", 00:12:48.045 "trsvcid": "4420", 00:12:48.045 "trtype": "TCP" 00:12:48.045 }, 00:12:48.045 "peer_address": { 00:12:48.045 "adrfam": "IPv4", 00:12:48.045 "traddr": "10.0.0.1", 00:12:48.045 "trsvcid": "57484", 00:12:48.045 "trtype": "TCP" 00:12:48.045 }, 
00:12:48.045 "qid": 0, 00:12:48.045 "state": "enabled", 00:12:48.045 "thread": "nvmf_tgt_poll_group_000" 00:12:48.045 } 00:12:48.045 ]' 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.045 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.302 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.302 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.302 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.302 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.302 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.592 14:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:49.157 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.414 14:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.348 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.348 { 00:12:50.348 "auth": { 00:12:50.348 "dhgroup": "ffdhe8192", 00:12:50.348 "digest": "sha256", 00:12:50.348 "state": "completed" 00:12:50.348 }, 00:12:50.348 "cntlid": 43, 00:12:50.348 "listen_address": { 00:12:50.348 "adrfam": "IPv4", 00:12:50.348 "traddr": "10.0.0.2", 00:12:50.348 "trsvcid": "4420", 00:12:50.348 "trtype": "TCP" 00:12:50.348 }, 00:12:50.348 "peer_address": { 00:12:50.348 "adrfam": "IPv4", 00:12:50.348 "traddr": "10.0.0.1", 00:12:50.348 "trsvcid": "57524", 00:12:50.348 "trtype": "TCP" 00:12:50.348 }, 00:12:50.348 "qid": 0, 00:12:50.348 "state": "enabled", 00:12:50.348 "thread": "nvmf_tgt_poll_group_000" 00:12:50.348 } 00:12:50.348 ]' 00:12:50.348 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.605 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:50.605 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.605 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.605 14:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.605 14:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.605 14:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.605 14:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.862 14:29:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.795 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.731 00:12:52.731 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.731 14:29:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.731 14:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.731 { 00:12:52.731 "auth": { 00:12:52.731 "dhgroup": "ffdhe8192", 00:12:52.731 "digest": "sha256", 00:12:52.731 "state": "completed" 00:12:52.731 }, 00:12:52.731 "cntlid": 45, 00:12:52.731 "listen_address": { 00:12:52.731 "adrfam": "IPv4", 00:12:52.731 "traddr": "10.0.0.2", 00:12:52.731 "trsvcid": "4420", 00:12:52.731 "trtype": "TCP" 00:12:52.731 }, 00:12:52.731 "peer_address": { 00:12:52.731 "adrfam": "IPv4", 00:12:52.731 "traddr": "10.0.0.1", 00:12:52.731 "trsvcid": "41462", 00:12:52.731 "trtype": "TCP" 00:12:52.731 }, 00:12:52.731 "qid": 0, 00:12:52.731 "state": "enabled", 00:12:52.731 "thread": "nvmf_tgt_poll_group_000" 00:12:52.731 } 00:12:52.731 ]' 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:52.731 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.989 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:52.990 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.990 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.990 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.990 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.249 14:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.824 14:29:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:53.824 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:54.087 14:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:55.018 00:12:55.018 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.018 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.018 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.276 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.276 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.276 14:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.276 14:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.276 14:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.276 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.276 { 00:12:55.276 "auth": { 00:12:55.276 "dhgroup": "ffdhe8192", 00:12:55.276 "digest": "sha256", 00:12:55.276 "state": "completed" 00:12:55.276 }, 00:12:55.276 "cntlid": 47, 00:12:55.276 "listen_address": { 00:12:55.276 "adrfam": "IPv4", 00:12:55.276 "traddr": "10.0.0.2", 00:12:55.276 "trsvcid": "4420", 00:12:55.276 "trtype": "TCP" 00:12:55.276 }, 00:12:55.276 
"peer_address": { 00:12:55.276 "adrfam": "IPv4", 00:12:55.276 "traddr": "10.0.0.1", 00:12:55.276 "trsvcid": "41488", 00:12:55.277 "trtype": "TCP" 00:12:55.277 }, 00:12:55.277 "qid": 0, 00:12:55.277 "state": "enabled", 00:12:55.277 "thread": "nvmf_tgt_poll_group_000" 00:12:55.277 } 00:12:55.277 ]' 00:12:55.277 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.277 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.277 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.277 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.277 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.534 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.534 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.534 14:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.791 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:56.356 14:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.615 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.873 00:12:56.873 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.873 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.873 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:57.131 { 00:12:57.131 "auth": { 00:12:57.131 "dhgroup": "null", 00:12:57.131 "digest": "sha384", 00:12:57.131 "state": "completed" 00:12:57.131 }, 00:12:57.131 "cntlid": 49, 00:12:57.131 "listen_address": { 00:12:57.131 "adrfam": "IPv4", 00:12:57.131 "traddr": "10.0.0.2", 00:12:57.131 "trsvcid": "4420", 00:12:57.131 "trtype": "TCP" 00:12:57.131 }, 00:12:57.131 "peer_address": { 00:12:57.131 "adrfam": "IPv4", 00:12:57.131 "traddr": "10.0.0.1", 00:12:57.131 "trsvcid": "41530", 00:12:57.131 "trtype": "TCP" 00:12:57.131 }, 00:12:57.131 "qid": 0, 00:12:57.131 "state": "enabled", 00:12:57.131 "thread": "nvmf_tgt_poll_group_000" 00:12:57.131 } 00:12:57.131 ]' 00:12:57.131 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:57.388 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.388 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:57.388 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:57.388 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:57.388 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.389 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:57.389 14:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.647 14:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:58.580 14:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.580 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.149 00:12:59.149 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:59.149 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.149 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:59.412 { 00:12:59.412 "auth": { 00:12:59.412 "dhgroup": "null", 00:12:59.412 "digest": "sha384", 00:12:59.412 "state": "completed" 00:12:59.412 }, 00:12:59.412 "cntlid": 51, 00:12:59.412 "listen_address": { 00:12:59.412 "adrfam": "IPv4", 00:12:59.412 "traddr": "10.0.0.2", 00:12:59.412 "trsvcid": "4420", 00:12:59.412 "trtype": "TCP" 00:12:59.412 }, 00:12:59.412 "peer_address": { 00:12:59.412 "adrfam": "IPv4", 00:12:59.412 "traddr": "10.0.0.1", 00:12:59.412 "trsvcid": "41548", 00:12:59.412 "trtype": "TCP" 00:12:59.412 }, 00:12:59.412 "qid": 0, 00:12:59.412 "state": "enabled", 00:12:59.412 "thread": "nvmf_tgt_poll_group_000" 00:12:59.412 } 00:12:59.412 ]' 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.412 14:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.978 14:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:00.543 
14:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:00.543 14:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.801 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.059 00:13:01.059 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.060 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.060 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.318 { 00:13:01.318 
"auth": { 00:13:01.318 "dhgroup": "null", 00:13:01.318 "digest": "sha384", 00:13:01.318 "state": "completed" 00:13:01.318 }, 00:13:01.318 "cntlid": 53, 00:13:01.318 "listen_address": { 00:13:01.318 "adrfam": "IPv4", 00:13:01.318 "traddr": "10.0.0.2", 00:13:01.318 "trsvcid": "4420", 00:13:01.318 "trtype": "TCP" 00:13:01.318 }, 00:13:01.318 "peer_address": { 00:13:01.318 "adrfam": "IPv4", 00:13:01.318 "traddr": "10.0.0.1", 00:13:01.318 "trsvcid": "41566", 00:13:01.318 "trtype": "TCP" 00:13:01.318 }, 00:13:01.318 "qid": 0, 00:13:01.318 "state": "enabled", 00:13:01.318 "thread": "nvmf_tgt_poll_group_000" 00:13:01.318 } 00:13:01.318 ]' 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:01.318 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.576 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.576 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.576 14:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.834 14:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:02.400 14:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.659 14:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:02.659 14:29:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.659 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.659 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.659 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.659 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.659 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:02.917 14:29:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.917 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.175 00:13:03.175 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.175 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.175 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.433 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.433 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.433 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.433 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.433 14:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.433 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.433 { 00:13:03.433 "auth": { 00:13:03.433 "dhgroup": "null", 00:13:03.433 "digest": "sha384", 00:13:03.433 "state": "completed" 00:13:03.433 }, 00:13:03.433 "cntlid": 55, 00:13:03.433 "listen_address": { 00:13:03.433 "adrfam": "IPv4", 00:13:03.433 "traddr": "10.0.0.2", 00:13:03.433 "trsvcid": "4420", 00:13:03.433 "trtype": "TCP" 00:13:03.433 }, 00:13:03.433 "peer_address": { 00:13:03.433 "adrfam": "IPv4", 00:13:03.433 "traddr": "10.0.0.1", 00:13:03.433 "trsvcid": "58996", 00:13:03.434 "trtype": "TCP" 00:13:03.434 }, 00:13:03.434 "qid": 0, 00:13:03.434 "state": "enabled", 00:13:03.434 "thread": "nvmf_tgt_poll_group_000" 00:13:03.434 } 00:13:03.434 ]' 00:13:03.434 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.434 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.434 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.434 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:03.434 14:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.692 14:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:13:03.692 14:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.692 14:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.949 14:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:04.538 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.813 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.075 00:13:05.075 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.075 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.075 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.333 { 00:13:05.333 "auth": { 00:13:05.333 "dhgroup": "ffdhe2048", 00:13:05.333 "digest": "sha384", 00:13:05.333 "state": "completed" 00:13:05.333 }, 00:13:05.333 "cntlid": 57, 00:13:05.333 "listen_address": { 00:13:05.333 "adrfam": "IPv4", 00:13:05.333 "traddr": "10.0.0.2", 00:13:05.333 "trsvcid": "4420", 00:13:05.333 "trtype": "TCP" 00:13:05.333 }, 00:13:05.333 "peer_address": { 00:13:05.333 "adrfam": "IPv4", 00:13:05.333 "traddr": "10.0.0.1", 00:13:05.333 "trsvcid": "59010", 00:13:05.333 "trtype": "TCP" 00:13:05.333 }, 00:13:05.333 "qid": 0, 00:13:05.333 "state": "enabled", 00:13:05.333 "thread": "nvmf_tgt_poll_group_000" 00:13:05.333 } 00:13:05.333 ]' 00:13:05.333 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.591 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.591 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.591 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:05.591 14:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.591 14:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.591 14:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.591 14:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.849 14:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.784 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.349 00:13:07.349 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.349 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.349 14:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.606 { 00:13:07.606 "auth": { 00:13:07.606 "dhgroup": "ffdhe2048", 00:13:07.606 "digest": "sha384", 00:13:07.606 "state": "completed" 00:13:07.606 }, 00:13:07.606 "cntlid": 59, 00:13:07.606 "listen_address": { 00:13:07.606 "adrfam": "IPv4", 00:13:07.606 "traddr": "10.0.0.2", 00:13:07.606 "trsvcid": "4420", 00:13:07.606 "trtype": "TCP" 00:13:07.606 }, 00:13:07.606 "peer_address": { 00:13:07.606 "adrfam": "IPv4", 00:13:07.606 "traddr": "10.0.0.1", 00:13:07.606 "trsvcid": "59034", 00:13:07.606 "trtype": "TCP" 00:13:07.606 }, 00:13:07.606 "qid": 0, 00:13:07.606 "state": "enabled", 00:13:07.606 "thread": "nvmf_tgt_poll_group_000" 00:13:07.606 } 00:13:07.606 ]' 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:07.606 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.863 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.863 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.863 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.120 14:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:08.733 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:08.990 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:13:08.990 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
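
Each pass of the loop traced above drives the same DH-HMAC-CHAP sequence over the SPDK RPC sockets. A minimal sketch of one pass follows, assuming the key names key2/ckey2 were registered earlier in the run (as the test's keys/ckeys arrays imply), the target uses the default RPC socket, and the listener is on 10.0.0.2:4420 — placeholder shell variables, not part of the captured log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95

    # Restrict the host-side NVMe driver to one digest and one DH group for this pass.
    $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Allow the host on the target subsystem with the matching key pair.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Attach an authenticated controller from the host side.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the negotiated parameters on the qpair, then tear the controller down.
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest'   # expect sha384
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.dhgroup'  # expect ffdhe2048
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'    # expect completed
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
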
00:13:08.990 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:08.990 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:08.990 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:08.990 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.991 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.991 14:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.991 14:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.991 14:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.991 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.991 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.247 00:13:09.247 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.247 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.247 14:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.505 { 00:13:09.505 "auth": { 00:13:09.505 "dhgroup": "ffdhe2048", 00:13:09.505 "digest": "sha384", 00:13:09.505 "state": "completed" 00:13:09.505 }, 00:13:09.505 "cntlid": 61, 00:13:09.505 "listen_address": { 00:13:09.505 "adrfam": "IPv4", 00:13:09.505 "traddr": "10.0.0.2", 00:13:09.505 "trsvcid": "4420", 00:13:09.505 "trtype": "TCP" 00:13:09.505 }, 00:13:09.505 "peer_address": { 00:13:09.505 "adrfam": "IPv4", 00:13:09.505 "traddr": "10.0.0.1", 00:13:09.505 "trsvcid": "59064", 00:13:09.505 "trtype": "TCP" 00:13:09.505 }, 00:13:09.505 "qid": 0, 00:13:09.505 "state": "enabled", 00:13:09.505 "thread": "nvmf_tgt_poll_group_000" 00:13:09.505 } 00:13:09.505 ]' 00:13:09.505 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.762 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.762 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:09.762 
14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:09.762 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:09.762 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.762 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.762 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.071 14:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.643 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.902 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:11.468 00:13:11.468 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:11.468 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:11.468 14:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.727 { 00:13:11.727 "auth": { 00:13:11.727 "dhgroup": "ffdhe2048", 00:13:11.727 "digest": "sha384", 00:13:11.727 "state": "completed" 00:13:11.727 }, 00:13:11.727 "cntlid": 63, 00:13:11.727 "listen_address": { 00:13:11.727 "adrfam": "IPv4", 00:13:11.727 "traddr": "10.0.0.2", 00:13:11.727 "trsvcid": "4420", 00:13:11.727 "trtype": "TCP" 00:13:11.727 }, 00:13:11.727 "peer_address": { 00:13:11.727 "adrfam": "IPv4", 00:13:11.727 "traddr": "10.0.0.1", 00:13:11.727 "trsvcid": "49482", 00:13:11.727 "trtype": "TCP" 00:13:11.727 }, 00:13:11.727 "qid": 0, 00:13:11.727 "state": "enabled", 00:13:11.727 "thread": "nvmf_tgt_poll_group_000" 00:13:11.727 } 00:13:11.727 ]' 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.727 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.294 14:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.861 14:29:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:12.861 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.119 14:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.685 00:13:13.685 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.685 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.685 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.943 { 00:13:13.943 "auth": { 00:13:13.943 "dhgroup": "ffdhe3072", 00:13:13.943 "digest": "sha384", 00:13:13.943 "state": "completed" 00:13:13.943 }, 00:13:13.943 "cntlid": 65, 00:13:13.943 "listen_address": { 00:13:13.943 "adrfam": "IPv4", 00:13:13.943 "traddr": "10.0.0.2", 00:13:13.943 "trsvcid": "4420", 00:13:13.943 "trtype": "TCP" 00:13:13.943 }, 00:13:13.943 "peer_address": { 00:13:13.943 "adrfam": "IPv4", 00:13:13.943 "traddr": "10.0.0.1", 00:13:13.943 "trsvcid": "49514", 00:13:13.943 "trtype": "TCP" 00:13:13.943 }, 00:13:13.943 "qid": 0, 00:13:13.943 "state": "enabled", 00:13:13.943 "thread": "nvmf_tgt_poll_group_000" 00:13:13.943 } 00:13:13.943 ]' 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.943 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.200 14:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:15.135 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:15.421 14:29:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.421 14:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.721 00:13:15.721 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:15.721 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:15.721 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:15.979 { 00:13:15.979 "auth": { 00:13:15.979 "dhgroup": "ffdhe3072", 00:13:15.979 "digest": "sha384", 00:13:15.979 "state": "completed" 00:13:15.979 }, 00:13:15.979 "cntlid": 67, 00:13:15.979 "listen_address": { 00:13:15.979 "adrfam": "IPv4", 00:13:15.979 "traddr": "10.0.0.2", 00:13:15.979 "trsvcid": "4420", 00:13:15.979 "trtype": "TCP" 00:13:15.979 }, 00:13:15.979 "peer_address": { 00:13:15.979 "adrfam": "IPv4", 00:13:15.979 "traddr": "10.0.0.1", 00:13:15.979 "trsvcid": "49542", 00:13:15.979 "trtype": "TCP" 00:13:15.979 }, 00:13:15.979 "qid": 0, 00:13:15.979 "state": "enabled", 00:13:15.979 "thread": "nvmf_tgt_poll_group_000" 00:13:15.979 } 00:13:15.979 ]' 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:15.979 
14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.979 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:16.237 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:16.237 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:16.237 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.237 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.237 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.496 14:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
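
Between the RPC-driven passes the script also exercises the kernel initiator, as in the nvme connect/disconnect entries above. A sketch of that step, assuming an nvme-cli build with DH-HMAC-CHAP support and placeholder secret strings (the real DHHC-1 values are generated earlier in the run):

    # Connect with explicit host and controller secrets, then disconnect and
    # drop the host from the subsystem before the next pass.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 \
        --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 \
        --dhchap-secret 'DHHC-1:01:<host-secret>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95
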
00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.432 14:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.998 00:13:17.998 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:17.998 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:17.998 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:18.256 { 00:13:18.256 "auth": { 00:13:18.256 "dhgroup": "ffdhe3072", 00:13:18.256 "digest": "sha384", 00:13:18.256 "state": "completed" 00:13:18.256 }, 00:13:18.256 "cntlid": 69, 00:13:18.256 "listen_address": { 00:13:18.256 "adrfam": "IPv4", 00:13:18.256 "traddr": "10.0.0.2", 00:13:18.256 "trsvcid": "4420", 00:13:18.256 "trtype": "TCP" 00:13:18.256 }, 00:13:18.256 "peer_address": { 00:13:18.256 "adrfam": "IPv4", 00:13:18.256 "traddr": "10.0.0.1", 00:13:18.256 "trsvcid": "49572", 00:13:18.256 "trtype": "TCP" 00:13:18.256 }, 00:13:18.256 "qid": 0, 00:13:18.256 "state": "enabled", 00:13:18.256 "thread": "nvmf_tgt_poll_group_000" 00:13:18.256 } 00:13:18.256 ]' 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.256 14:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.514 14:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret 
DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:19.473 14:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.473 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:20.039 00:13:20.039 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:20.039 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:20.039 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.297 14:29:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:20.297 { 00:13:20.297 "auth": { 00:13:20.297 "dhgroup": "ffdhe3072", 00:13:20.297 "digest": "sha384", 00:13:20.297 "state": "completed" 00:13:20.297 }, 00:13:20.297 "cntlid": 71, 00:13:20.297 "listen_address": { 00:13:20.297 "adrfam": "IPv4", 00:13:20.297 "traddr": "10.0.0.2", 00:13:20.297 "trsvcid": "4420", 00:13:20.297 "trtype": "TCP" 00:13:20.297 }, 00:13:20.297 "peer_address": { 00:13:20.297 "adrfam": "IPv4", 00:13:20.297 "traddr": "10.0.0.1", 00:13:20.297 "trsvcid": "49598", 00:13:20.297 "trtype": "TCP" 00:13:20.297 }, 00:13:20.297 "qid": 0, 00:13:20.297 "state": "enabled", 00:13:20.297 "thread": "nvmf_tgt_poll_group_000" 00:13:20.297 } 00:13:20.297 ]' 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.297 14:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.621 14:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:21.554 14:30:00 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.554 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.120 00:13:22.120 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:22.120 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.120 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:22.379 { 00:13:22.379 "auth": { 00:13:22.379 "dhgroup": "ffdhe4096", 00:13:22.379 "digest": "sha384", 00:13:22.379 "state": "completed" 00:13:22.379 }, 00:13:22.379 "cntlid": 73, 00:13:22.379 "listen_address": { 00:13:22.379 "adrfam": "IPv4", 00:13:22.379 "traddr": "10.0.0.2", 00:13:22.379 "trsvcid": "4420", 00:13:22.379 "trtype": "TCP" 00:13:22.379 }, 00:13:22.379 "peer_address": { 00:13:22.379 "adrfam": "IPv4", 00:13:22.379 "traddr": "10.0.0.1", 00:13:22.379 "trsvcid": "50306", 00:13:22.379 "trtype": "TCP" 00:13:22.379 }, 00:13:22.379 "qid": 0, 00:13:22.379 "state": "enabled", 
00:13:22.379 "thread": "nvmf_tgt_poll_group_000" 00:13:22.379 } 00:13:22.379 ]' 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:22.379 14:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:22.638 14:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.638 14:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.638 14:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.897 14:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.837 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.403 00:13:24.403 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:24.403 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:24.403 14:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:24.662 { 00:13:24.662 "auth": { 00:13:24.662 "dhgroup": "ffdhe4096", 00:13:24.662 "digest": "sha384", 00:13:24.662 "state": "completed" 00:13:24.662 }, 00:13:24.662 "cntlid": 75, 00:13:24.662 "listen_address": { 00:13:24.662 "adrfam": "IPv4", 00:13:24.662 "traddr": "10.0.0.2", 00:13:24.662 "trsvcid": "4420", 00:13:24.662 "trtype": "TCP" 00:13:24.662 }, 00:13:24.662 "peer_address": { 00:13:24.662 "adrfam": "IPv4", 00:13:24.662 "traddr": "10.0.0.1", 00:13:24.662 "trsvcid": "50338", 00:13:24.662 "trtype": "TCP" 00:13:24.662 }, 00:13:24.662 "qid": 0, 00:13:24.662 "state": "enabled", 00:13:24.662 "thread": "nvmf_tgt_poll_group_000" 00:13:24.662 } 00:13:24.662 ]' 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.662 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.921 14:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:25.486 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.486 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:25.486 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.486 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.749 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.008 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.008 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.008 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.266 00:13:26.266 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:26.266 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.266 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:26.524 { 00:13:26.524 "auth": { 00:13:26.524 "dhgroup": "ffdhe4096", 00:13:26.524 "digest": "sha384", 00:13:26.524 "state": "completed" 00:13:26.524 }, 00:13:26.524 "cntlid": 77, 00:13:26.524 "listen_address": { 00:13:26.524 "adrfam": "IPv4", 00:13:26.524 "traddr": "10.0.0.2", 00:13:26.524 "trsvcid": "4420", 00:13:26.524 "trtype": "TCP" 00:13:26.524 }, 00:13:26.524 "peer_address": { 00:13:26.524 "adrfam": "IPv4", 00:13:26.524 "traddr": "10.0.0.1", 00:13:26.524 "trsvcid": "50380", 00:13:26.524 "trtype": "TCP" 00:13:26.524 }, 00:13:26.524 "qid": 0, 00:13:26.524 "state": "enabled", 00:13:26.524 "thread": "nvmf_tgt_poll_group_000" 00:13:26.524 } 00:13:26.524 ]' 00:13:26.524 14:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.524 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.091 14:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.658 14:30:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:27.658 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.917 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:28.176 00:13:28.176 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:28.176 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:28.176 14:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:28.433 { 00:13:28.433 "auth": { 00:13:28.433 "dhgroup": "ffdhe4096", 00:13:28.433 "digest": "sha384", 00:13:28.433 "state": "completed" 00:13:28.433 }, 00:13:28.433 "cntlid": 79, 00:13:28.433 "listen_address": { 00:13:28.433 "adrfam": "IPv4", 00:13:28.433 "traddr": "10.0.0.2", 00:13:28.433 "trsvcid": "4420", 00:13:28.433 "trtype": "TCP" 00:13:28.433 }, 00:13:28.433 
"peer_address": { 00:13:28.433 "adrfam": "IPv4", 00:13:28.433 "traddr": "10.0.0.1", 00:13:28.433 "trsvcid": "50412", 00:13:28.433 "trtype": "TCP" 00:13:28.433 }, 00:13:28.433 "qid": 0, 00:13:28.433 "state": "enabled", 00:13:28.433 "thread": "nvmf_tgt_poll_group_000" 00:13:28.433 } 00:13:28.433 ]' 00:13:28.433 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.691 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.949 14:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.882 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.447 00:13:30.447 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:30.447 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:30.447 14:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:30.705 { 00:13:30.705 "auth": { 00:13:30.705 "dhgroup": "ffdhe6144", 00:13:30.705 "digest": "sha384", 00:13:30.705 "state": "completed" 00:13:30.705 }, 00:13:30.705 "cntlid": 81, 00:13:30.705 "listen_address": { 00:13:30.705 "adrfam": "IPv4", 00:13:30.705 "traddr": "10.0.0.2", 00:13:30.705 "trsvcid": "4420", 00:13:30.705 "trtype": "TCP" 00:13:30.705 }, 00:13:30.705 "peer_address": { 00:13:30.705 "adrfam": "IPv4", 00:13:30.705 "traddr": "10.0.0.1", 00:13:30.705 "trsvcid": "50438", 00:13:30.705 "trtype": "TCP" 00:13:30.705 }, 00:13:30.705 "qid": 0, 00:13:30.705 "state": "enabled", 00:13:30.705 "thread": "nvmf_tgt_poll_group_000" 00:13:30.705 } 00:13:30.705 ]' 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.705 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.705 14:30:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.275 14:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:31.840 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.097 14:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.662 00:13:32.662 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:32.662 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.662 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.921 { 00:13:32.921 "auth": { 00:13:32.921 "dhgroup": "ffdhe6144", 00:13:32.921 "digest": "sha384", 00:13:32.921 "state": "completed" 00:13:32.921 }, 00:13:32.921 "cntlid": 83, 00:13:32.921 "listen_address": { 00:13:32.921 "adrfam": "IPv4", 00:13:32.921 "traddr": "10.0.0.2", 00:13:32.921 "trsvcid": "4420", 00:13:32.921 "trtype": "TCP" 00:13:32.921 }, 00:13:32.921 "peer_address": { 00:13:32.921 "adrfam": "IPv4", 00:13:32.921 "traddr": "10.0.0.1", 00:13:32.921 "trsvcid": "39818", 00:13:32.921 "trtype": "TCP" 00:13:32.921 }, 00:13:32.921 "qid": 0, 00:13:32.921 "state": "enabled", 00:13:32.921 "thread": "nvmf_tgt_poll_group_000" 00:13:32.921 } 00:13:32.921 ]' 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.921 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:33.180 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:33.180 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:33.180 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.180 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.180 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.439 14:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.373 14:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.939 00:13:34.939 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.939 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.939 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:35.198 { 00:13:35.198 "auth": { 
00:13:35.198 "dhgroup": "ffdhe6144", 00:13:35.198 "digest": "sha384", 00:13:35.198 "state": "completed" 00:13:35.198 }, 00:13:35.198 "cntlid": 85, 00:13:35.198 "listen_address": { 00:13:35.198 "adrfam": "IPv4", 00:13:35.198 "traddr": "10.0.0.2", 00:13:35.198 "trsvcid": "4420", 00:13:35.198 "trtype": "TCP" 00:13:35.198 }, 00:13:35.198 "peer_address": { 00:13:35.198 "adrfam": "IPv4", 00:13:35.198 "traddr": "10.0.0.1", 00:13:35.198 "trsvcid": "39856", 00:13:35.198 "trtype": "TCP" 00:13:35.198 }, 00:13:35.198 "qid": 0, 00:13:35.198 "state": "enabled", 00:13:35.198 "thread": "nvmf_tgt_poll_group_000" 00:13:35.198 } 00:13:35.198 ]' 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:35.198 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:35.456 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.456 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.456 14:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.715 14:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:36.283 14:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:36.541 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:36.541 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.541 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 
00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.542 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:37.108 00:13:37.108 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:37.108 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:37.108 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.366 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:37.367 { 00:13:37.367 "auth": { 00:13:37.367 "dhgroup": "ffdhe6144", 00:13:37.367 "digest": "sha384", 00:13:37.367 "state": "completed" 00:13:37.367 }, 00:13:37.367 "cntlid": 87, 00:13:37.367 "listen_address": { 00:13:37.367 "adrfam": "IPv4", 00:13:37.367 "traddr": "10.0.0.2", 00:13:37.367 "trsvcid": "4420", 00:13:37.367 "trtype": "TCP" 00:13:37.367 }, 00:13:37.367 "peer_address": { 00:13:37.367 "adrfam": "IPv4", 00:13:37.367 "traddr": "10.0.0.1", 00:13:37.367 "trsvcid": "39880", 00:13:37.367 "trtype": "TCP" 00:13:37.367 }, 00:13:37.367 "qid": 0, 00:13:37.367 "state": "enabled", 00:13:37.367 "thread": "nvmf_tgt_poll_group_000" 00:13:37.367 } 00:13:37.367 ]' 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.367 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:37.625 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.625 14:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:37.625 14:30:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.625 14:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.625 14:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.883 14:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.819 14:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.753 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.753 14:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.754 14:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.754 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:39.754 { 00:13:39.754 "auth": { 00:13:39.754 "dhgroup": "ffdhe8192", 00:13:39.754 "digest": "sha384", 00:13:39.754 "state": "completed" 00:13:39.754 }, 00:13:39.754 "cntlid": 89, 00:13:39.754 "listen_address": { 00:13:39.754 "adrfam": "IPv4", 00:13:39.754 "traddr": "10.0.0.2", 00:13:39.754 "trsvcid": "4420", 00:13:39.754 "trtype": "TCP" 00:13:39.754 }, 00:13:39.754 "peer_address": { 00:13:39.754 "adrfam": "IPv4", 00:13:39.754 "traddr": "10.0.0.1", 00:13:39.754 "trsvcid": "39906", 00:13:39.754 "trtype": "TCP" 00:13:39.754 }, 00:13:39.754 "qid": 0, 00:13:39.754 "state": "enabled", 00:13:39.754 "thread": "nvmf_tgt_poll_group_000" 00:13:39.754 } 00:13:39.754 ]' 00:13:39.754 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:39.754 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.754 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:40.012 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:40.012 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:40.012 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.012 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.012 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.270 14:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.837 
14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:40.837 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.445 14:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.446 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.446 14:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.011 00:13:42.011 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:42.011 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:42.011 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:42.270 { 00:13:42.270 "auth": { 00:13:42.270 "dhgroup": "ffdhe8192", 00:13:42.270 "digest": "sha384", 00:13:42.270 "state": "completed" 00:13:42.270 }, 00:13:42.270 "cntlid": 91, 00:13:42.270 "listen_address": { 00:13:42.270 "adrfam": "IPv4", 00:13:42.270 "traddr": "10.0.0.2", 00:13:42.270 "trsvcid": "4420", 00:13:42.270 "trtype": "TCP" 00:13:42.270 }, 00:13:42.270 "peer_address": { 00:13:42.270 "adrfam": "IPv4", 00:13:42.270 "traddr": "10.0.0.1", 00:13:42.270 "trsvcid": "59376", 00:13:42.270 "trtype": "TCP" 00:13:42.270 }, 00:13:42.270 "qid": 0, 00:13:42.270 "state": "enabled", 00:13:42.270 "thread": "nvmf_tgt_poll_group_000" 00:13:42.270 } 00:13:42.270 ]' 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.270 14:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.530 14:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:43.465 14:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.731 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.303 00:13:44.303 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:44.303 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:44.303 14:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.560 { 00:13:44.560 "auth": { 00:13:44.560 "dhgroup": "ffdhe8192", 00:13:44.560 "digest": "sha384", 00:13:44.560 "state": "completed" 00:13:44.560 }, 00:13:44.560 "cntlid": 93, 00:13:44.560 "listen_address": { 00:13:44.560 "adrfam": "IPv4", 00:13:44.560 "traddr": "10.0.0.2", 00:13:44.560 "trsvcid": "4420", 00:13:44.560 "trtype": "TCP" 00:13:44.560 }, 00:13:44.560 "peer_address": { 00:13:44.560 "adrfam": "IPv4", 00:13:44.560 "traddr": "10.0.0.1", 00:13:44.560 "trsvcid": "59384", 00:13:44.560 "trtype": "TCP" 00:13:44.560 }, 00:13:44.560 "qid": 0, 00:13:44.560 "state": "enabled", 00:13:44.560 "thread": "nvmf_tgt_poll_group_000" 00:13:44.560 } 00:13:44.560 ]' 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.560 14:30:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.817 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.818 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.818 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.818 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.818 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.075 14:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:45.641 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.641 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:45.641 14:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.641 14:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.900 14:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.901 14:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.158 14:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.158 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.158 14:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:46.725 00:13:46.725 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.726 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.726 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.985 { 00:13:46.985 "auth": { 00:13:46.985 "dhgroup": "ffdhe8192", 00:13:46.985 "digest": "sha384", 00:13:46.985 "state": "completed" 00:13:46.985 }, 00:13:46.985 "cntlid": 95, 00:13:46.985 "listen_address": { 00:13:46.985 "adrfam": "IPv4", 00:13:46.985 "traddr": "10.0.0.2", 00:13:46.985 "trsvcid": "4420", 00:13:46.985 "trtype": "TCP" 00:13:46.985 }, 00:13:46.985 "peer_address": { 00:13:46.985 "adrfam": "IPv4", 00:13:46.985 "traddr": "10.0.0.1", 00:13:46.985 "trsvcid": "59400", 00:13:46.985 "trtype": "TCP" 00:13:46.985 }, 00:13:46.985 "qid": 0, 00:13:46.985 "state": "enabled", 00:13:46.985 "thread": "nvmf_tgt_poll_group_000" 00:13:46.985 } 00:13:46.985 ]' 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:46.985 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:47.243 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.243 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:47.243 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.243 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.243 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.502 14:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.070 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:48.070 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.637 14:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.895 00:13:48.895 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.895 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.895 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.155 { 00:13:49.155 "auth": { 00:13:49.155 "dhgroup": "null", 00:13:49.155 "digest": "sha512", 00:13:49.155 "state": "completed" 00:13:49.155 }, 00:13:49.155 "cntlid": 97, 00:13:49.155 "listen_address": { 00:13:49.155 "adrfam": "IPv4", 00:13:49.155 "traddr": "10.0.0.2", 00:13:49.155 "trsvcid": "4420", 00:13:49.155 "trtype": "TCP" 00:13:49.155 }, 00:13:49.155 "peer_address": { 00:13:49.155 "adrfam": "IPv4", 00:13:49.155 "traddr": "10.0.0.1", 00:13:49.155 "trsvcid": "59418", 00:13:49.155 "trtype": "TCP" 00:13:49.155 }, 00:13:49.155 "qid": 0, 00:13:49.155 "state": "enabled", 00:13:49.155 "thread": "nvmf_tgt_poll_group_000" 00:13:49.155 } 00:13:49.155 ]' 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:49.155 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.413 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:49.413 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.413 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.413 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.413 14:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.671 14:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:50.606 14:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.606 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.172 00:13:51.172 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.172 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.172 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.430 { 00:13:51.430 "auth": { 00:13:51.430 "dhgroup": "null", 00:13:51.430 "digest": "sha512", 00:13:51.430 "state": "completed" 00:13:51.430 }, 00:13:51.430 "cntlid": 99, 00:13:51.430 "listen_address": { 00:13:51.430 "adrfam": "IPv4", 00:13:51.430 "traddr": "10.0.0.2", 00:13:51.430 "trsvcid": "4420", 00:13:51.430 "trtype": "TCP" 00:13:51.430 }, 00:13:51.430 "peer_address": { 00:13:51.430 "adrfam": "IPv4", 00:13:51.430 "traddr": "10.0.0.1", 00:13:51.430 "trsvcid": "59432", 00:13:51.430 "trtype": "TCP" 00:13:51.430 }, 00:13:51.430 "qid": 0, 00:13:51.430 "state": "enabled", 00:13:51.430 "thread": "nvmf_tgt_poll_group_000" 
00:13:51.430 } 00:13:51.430 ]' 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.430 14:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.688 14:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:52.622 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.881 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.139 00:13:53.139 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:53.139 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:53.139 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:53.396 { 00:13:53.396 "auth": { 00:13:53.396 "dhgroup": "null", 00:13:53.396 "digest": "sha512", 00:13:53.396 "state": "completed" 00:13:53.396 }, 00:13:53.396 "cntlid": 101, 00:13:53.396 "listen_address": { 00:13:53.396 "adrfam": "IPv4", 00:13:53.396 "traddr": "10.0.0.2", 00:13:53.396 "trsvcid": "4420", 00:13:53.396 "trtype": "TCP" 00:13:53.396 }, 00:13:53.396 "peer_address": { 00:13:53.396 "adrfam": "IPv4", 00:13:53.396 "traddr": "10.0.0.1", 00:13:53.396 "trsvcid": "47330", 00:13:53.396 "trtype": "TCP" 00:13:53.396 }, 00:13:53.396 "qid": 0, 00:13:53.396 "state": "enabled", 00:13:53.396 "thread": "nvmf_tgt_poll_group_000" 00:13:53.396 } 00:13:53.396 ]' 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.396 14:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:53.655 14:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:53.655 14:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:53.655 14:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.655 14:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.655 14:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.912 14:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid 
de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:13:54.844 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.844 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:54.844 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.844 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.845 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:55.102 00:13:55.102 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:55.102 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.102 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:55.360 { 00:13:55.360 "auth": { 00:13:55.360 "dhgroup": "null", 00:13:55.360 "digest": "sha512", 00:13:55.360 "state": "completed" 00:13:55.360 }, 00:13:55.360 "cntlid": 103, 00:13:55.360 "listen_address": { 00:13:55.360 "adrfam": "IPv4", 00:13:55.360 "traddr": "10.0.0.2", 00:13:55.360 "trsvcid": "4420", 00:13:55.360 "trtype": "TCP" 00:13:55.360 }, 00:13:55.360 "peer_address": { 00:13:55.360 "adrfam": "IPv4", 00:13:55.360 "traddr": "10.0.0.1", 00:13:55.360 "trsvcid": "47352", 00:13:55.360 "trtype": "TCP" 00:13:55.360 }, 00:13:55.360 "qid": 0, 00:13:55.360 "state": "enabled", 00:13:55.360 "thread": "nvmf_tgt_poll_group_000" 00:13:55.360 } 00:13:55.360 ]' 00:13:55.360 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:55.617 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.617 14:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:55.618 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:55.618 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:55.618 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.618 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.618 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.881 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:56.446 14:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:56.446 14:30:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.704 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.962 00:13:56.962 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.962 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.962 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:57.220 { 00:13:57.220 "auth": { 00:13:57.220 "dhgroup": "ffdhe2048", 00:13:57.220 "digest": "sha512", 00:13:57.220 "state": "completed" 00:13:57.220 }, 00:13:57.220 "cntlid": 105, 00:13:57.220 "listen_address": { 00:13:57.220 "adrfam": "IPv4", 00:13:57.220 "traddr": "10.0.0.2", 00:13:57.220 "trsvcid": "4420", 00:13:57.220 "trtype": "TCP" 00:13:57.220 }, 00:13:57.220 "peer_address": { 00:13:57.220 "adrfam": "IPv4", 00:13:57.220 "traddr": "10.0.0.1", 00:13:57.220 "trsvcid": "47370", 00:13:57.220 "trtype": "TCP" 00:13:57.220 }, 00:13:57.220 "qid": 0, 
00:13:57.220 "state": "enabled", 00:13:57.220 "thread": "nvmf_tgt_poll_group_000" 00:13:57.220 } 00:13:57.220 ]' 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.220 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.478 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:57.478 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.478 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.478 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.478 14:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.735 14:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:13:58.301 14:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.301 14:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:13:58.301 14:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.301 14:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.559 14:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.559 14:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.559 14:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:58.559 14:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.559 14:30:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.559 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.126 00:13:59.126 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.126 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.126 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.385 { 00:13:59.385 "auth": { 00:13:59.385 "dhgroup": "ffdhe2048", 00:13:59.385 "digest": "sha512", 00:13:59.385 "state": "completed" 00:13:59.385 }, 00:13:59.385 "cntlid": 107, 00:13:59.385 "listen_address": { 00:13:59.385 "adrfam": "IPv4", 00:13:59.385 "traddr": "10.0.0.2", 00:13:59.385 "trsvcid": "4420", 00:13:59.385 "trtype": "TCP" 00:13:59.385 }, 00:13:59.385 "peer_address": { 00:13:59.385 "adrfam": "IPv4", 00:13:59.385 "traddr": "10.0.0.1", 00:13:59.385 "trsvcid": "47388", 00:13:59.385 "trtype": "TCP" 00:13:59.385 }, 00:13:59.385 "qid": 0, 00:13:59.385 "state": "enabled", 00:13:59.385 "thread": "nvmf_tgt_poll_group_000" 00:13:59.385 } 00:13:59.385 ]' 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.385 14:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.644 14:30:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:00.580 14:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.580 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.146 00:14:01.146 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.146 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:14:01.146 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.404 { 00:14:01.404 "auth": { 00:14:01.404 "dhgroup": "ffdhe2048", 00:14:01.404 "digest": "sha512", 00:14:01.404 "state": "completed" 00:14:01.404 }, 00:14:01.404 "cntlid": 109, 00:14:01.404 "listen_address": { 00:14:01.404 "adrfam": "IPv4", 00:14:01.404 "traddr": "10.0.0.2", 00:14:01.404 "trsvcid": "4420", 00:14:01.404 "trtype": "TCP" 00:14:01.404 }, 00:14:01.404 "peer_address": { 00:14:01.404 "adrfam": "IPv4", 00:14:01.404 "traddr": "10.0.0.1", 00:14:01.404 "trsvcid": "47432", 00:14:01.404 "trtype": "TCP" 00:14:01.404 }, 00:14:01.404 "qid": 0, 00:14:01.404 "state": "enabled", 00:14:01.404 "thread": "nvmf_tgt_poll_group_000" 00:14:01.404 } 00:14:01.404 ]' 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.404 14:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.673 14:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:02.609 14:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.609 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.174 00:14:03.174 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.174 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.174 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.432 { 00:14:03.432 "auth": { 00:14:03.432 "dhgroup": "ffdhe2048", 00:14:03.432 "digest": "sha512", 00:14:03.432 "state": "completed" 00:14:03.432 }, 00:14:03.432 "cntlid": 111, 00:14:03.432 "listen_address": { 00:14:03.432 "adrfam": "IPv4", 00:14:03.432 "traddr": "10.0.0.2", 00:14:03.432 "trsvcid": "4420", 00:14:03.432 "trtype": "TCP" 00:14:03.432 }, 00:14:03.432 "peer_address": { 00:14:03.432 
"adrfam": "IPv4", 00:14:03.432 "traddr": "10.0.0.1", 00:14:03.432 "trsvcid": "56482", 00:14:03.432 "trtype": "TCP" 00:14:03.432 }, 00:14:03.432 "qid": 0, 00:14:03.432 "state": "enabled", 00:14:03.432 "thread": "nvmf_tgt_poll_group_000" 00:14:03.432 } 00:14:03.432 ]' 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.432 14:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.690 14:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:04.623 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.881 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.139 00:14:05.139 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.139 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.139 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.397 { 00:14:05.397 "auth": { 00:14:05.397 "dhgroup": "ffdhe3072", 00:14:05.397 "digest": "sha512", 00:14:05.397 "state": "completed" 00:14:05.397 }, 00:14:05.397 "cntlid": 113, 00:14:05.397 "listen_address": { 00:14:05.397 "adrfam": "IPv4", 00:14:05.397 "traddr": "10.0.0.2", 00:14:05.397 "trsvcid": "4420", 00:14:05.397 "trtype": "TCP" 00:14:05.397 }, 00:14:05.397 "peer_address": { 00:14:05.397 "adrfam": "IPv4", 00:14:05.397 "traddr": "10.0.0.1", 00:14:05.397 "trsvcid": "56504", 00:14:05.397 "trtype": "TCP" 00:14:05.397 }, 00:14:05.397 "qid": 0, 00:14:05.397 "state": "enabled", 00:14:05.397 "thread": "nvmf_tgt_poll_group_000" 00:14:05.397 } 00:14:05.397 ]' 00:14:05.397 14:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.655 14:30:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.913 14:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.847 14:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.106 14:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.106 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.106 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:07.364 00:14:07.364 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.364 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.364 14:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.621 { 00:14:07.621 "auth": { 00:14:07.621 "dhgroup": "ffdhe3072", 00:14:07.621 "digest": "sha512", 00:14:07.621 "state": "completed" 00:14:07.621 }, 00:14:07.621 "cntlid": 115, 00:14:07.621 "listen_address": { 00:14:07.621 "adrfam": "IPv4", 00:14:07.621 "traddr": "10.0.0.2", 00:14:07.621 "trsvcid": "4420", 00:14:07.621 "trtype": "TCP" 00:14:07.621 }, 00:14:07.621 "peer_address": { 00:14:07.621 "adrfam": "IPv4", 00:14:07.621 "traddr": "10.0.0.1", 00:14:07.621 "trsvcid": "56538", 00:14:07.621 "trtype": "TCP" 00:14:07.621 }, 00:14:07.621 "qid": 0, 00:14:07.621 "state": "enabled", 00:14:07.621 "thread": "nvmf_tgt_poll_group_000" 00:14:07.621 } 00:14:07.621 ]' 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:07.621 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.880 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.880 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.880 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.137 14:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:14:08.702 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.702 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:08.702 14:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.703 
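(Context note: after each RPC-initiator pass, the trace above also exercises the kernel initiator, passing the DH-HMAC-CHAP secrets directly on the nvme-cli command line before tearing the host back down. A condensed sketch of that step as it appears in this trace; the host UUID and the secret values below are placeholders, not the trace's real key material.)

    # Kernel initiator: connect with bidirectional DH-HMAC-CHAP secrets (placeholders shown)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" --hostid "${HOST_UUID}" \
        --dhchap-secret "DHHC-1:01:<host secret>" \
        --dhchap-ctrl-secret "DHHC-1:02:<controller secret>"
    # Tear down and deauthorize the host before the next key/dhgroup combination
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}"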
14:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.703 14:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.703 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:08.703 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:08.703 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:08.960 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.961 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.219 00:14:09.477 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.477 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.477 14:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.735 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.736 { 00:14:09.736 "auth": { 00:14:09.736 "dhgroup": "ffdhe3072", 00:14:09.736 "digest": "sha512", 
00:14:09.736 "state": "completed" 00:14:09.736 }, 00:14:09.736 "cntlid": 117, 00:14:09.736 "listen_address": { 00:14:09.736 "adrfam": "IPv4", 00:14:09.736 "traddr": "10.0.0.2", 00:14:09.736 "trsvcid": "4420", 00:14:09.736 "trtype": "TCP" 00:14:09.736 }, 00:14:09.736 "peer_address": { 00:14:09.736 "adrfam": "IPv4", 00:14:09.736 "traddr": "10.0.0.1", 00:14:09.736 "trsvcid": "56582", 00:14:09.736 "trtype": "TCP" 00:14:09.736 }, 00:14:09.736 "qid": 0, 00:14:09.736 "state": "enabled", 00:14:09.736 "thread": "nvmf_tgt_poll_group_000" 00:14:09.736 } 00:14:09.736 ]' 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.736 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.002 14:30:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:10.944 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 
-- # key=key3 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.202 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:11.460 00:14:11.460 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.460 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.460 14:30:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.719 { 00:14:11.719 "auth": { 00:14:11.719 "dhgroup": "ffdhe3072", 00:14:11.719 "digest": "sha512", 00:14:11.719 "state": "completed" 00:14:11.719 }, 00:14:11.719 "cntlid": 119, 00:14:11.719 "listen_address": { 00:14:11.719 "adrfam": "IPv4", 00:14:11.719 "traddr": "10.0.0.2", 00:14:11.719 "trsvcid": "4420", 00:14:11.719 "trtype": "TCP" 00:14:11.719 }, 00:14:11.719 "peer_address": { 00:14:11.719 "adrfam": "IPv4", 00:14:11.719 "traddr": "10.0.0.1", 00:14:11.719 "trsvcid": "37116", 00:14:11.719 "trtype": "TCP" 00:14:11.719 }, 00:14:11.719 "qid": 0, 00:14:11.719 "state": "enabled", 00:14:11.719 "thread": "nvmf_tgt_poll_group_000" 00:14:11.719 } 00:14:11.719 ]' 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.719 
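(Context note: the block above repeats the verification the test runs after every authenticated attach: it queries the target for the subsystem's active queue pairs and uses jq to confirm the negotiated digest, DH group, and authentication state. A minimal sketch of that check, with the subsystem NQN and socket path taken from this trace and the expected values matching the ffdhe3072/sha512 round shown here.)

    # Target-side RPC: list queue pairs for the subsystem and inspect the auth block
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Host-side RPC: drop the controller again before the next round
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0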
14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.719 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.286 14:30:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:12.853 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.112 14:30:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.679 00:14:13.679 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.679 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.679 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.937 { 00:14:13.937 "auth": { 00:14:13.937 "dhgroup": "ffdhe4096", 00:14:13.937 "digest": "sha512", 00:14:13.937 "state": "completed" 00:14:13.937 }, 00:14:13.937 "cntlid": 121, 00:14:13.937 "listen_address": { 00:14:13.937 "adrfam": "IPv4", 00:14:13.937 "traddr": "10.0.0.2", 00:14:13.937 "trsvcid": "4420", 00:14:13.937 "trtype": "TCP" 00:14:13.937 }, 00:14:13.937 "peer_address": { 00:14:13.937 "adrfam": "IPv4", 00:14:13.937 "traddr": "10.0.0.1", 00:14:13.937 "trsvcid": "37130", 00:14:13.937 "trtype": "TCP" 00:14:13.937 }, 00:14:13.937 "qid": 0, 00:14:13.937 "state": "enabled", 00:14:13.937 "thread": "nvmf_tgt_poll_group_000" 00:14:13.937 } 00:14:13.937 ]' 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.937 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.502 14:30:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:15.065 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.322 14:30:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.885 00:14:15.885 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.885 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.885 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.142 { 00:14:16.142 "auth": { 00:14:16.142 "dhgroup": "ffdhe4096", 00:14:16.142 "digest": "sha512", 00:14:16.142 "state": "completed" 00:14:16.142 }, 00:14:16.142 "cntlid": 123, 00:14:16.142 "listen_address": { 00:14:16.142 "adrfam": "IPv4", 00:14:16.142 "traddr": "10.0.0.2", 00:14:16.142 "trsvcid": "4420", 00:14:16.142 "trtype": "TCP" 00:14:16.142 }, 00:14:16.142 "peer_address": { 00:14:16.142 "adrfam": "IPv4", 00:14:16.142 "traddr": "10.0.0.1", 00:14:16.142 "trsvcid": "37164", 00:14:16.142 "trtype": "TCP" 00:14:16.142 }, 00:14:16.142 "qid": 0, 00:14:16.142 "state": "enabled", 00:14:16.142 "thread": "nvmf_tgt_poll_group_000" 00:14:16.142 } 00:14:16.142 ]' 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.142 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.399 14:30:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:17.332 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.590 14:30:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.590 14:30:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.848 00:14:17.848 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.848 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.848 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.106 { 00:14:18.106 "auth": { 00:14:18.106 "dhgroup": "ffdhe4096", 00:14:18.106 "digest": "sha512", 00:14:18.106 "state": "completed" 00:14:18.106 }, 00:14:18.106 "cntlid": 125, 00:14:18.106 "listen_address": { 00:14:18.106 "adrfam": "IPv4", 00:14:18.106 "traddr": "10.0.0.2", 00:14:18.106 "trsvcid": "4420", 00:14:18.106 "trtype": "TCP" 00:14:18.106 }, 00:14:18.106 "peer_address": { 00:14:18.106 "adrfam": "IPv4", 00:14:18.106 "traddr": "10.0.0.1", 00:14:18.106 "trsvcid": "37190", 00:14:18.106 "trtype": "TCP" 00:14:18.106 }, 00:14:18.106 "qid": 0, 00:14:18.106 "state": "enabled", 00:14:18.106 "thread": "nvmf_tgt_poll_group_000" 00:14:18.106 } 00:14:18.106 ]' 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.106 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.364 14:30:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:18.364 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.364 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.364 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.365 14:30:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.624 14:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:19.262 14:30:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:19.521 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:14:19.521 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.522 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.088 00:14:20.088 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.088 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.088 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.347 { 00:14:20.347 "auth": { 00:14:20.347 "dhgroup": "ffdhe4096", 00:14:20.347 "digest": "sha512", 00:14:20.347 "state": "completed" 00:14:20.347 }, 00:14:20.347 "cntlid": 127, 00:14:20.347 "listen_address": { 00:14:20.347 "adrfam": "IPv4", 00:14:20.347 "traddr": "10.0.0.2", 00:14:20.347 "trsvcid": "4420", 00:14:20.347 "trtype": "TCP" 00:14:20.347 }, 00:14:20.347 "peer_address": { 00:14:20.347 "adrfam": "IPv4", 00:14:20.347 "traddr": "10.0.0.1", 00:14:20.347 "trsvcid": "37216", 00:14:20.347 "trtype": "TCP" 00:14:20.347 }, 00:14:20.347 "qid": 0, 00:14:20.347 "state": "enabled", 00:14:20.347 "thread": "nvmf_tgt_poll_group_000" 00:14:20.347 } 00:14:20.347 ]' 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.347 14:30:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.605 14:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.577 14:31:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:21.577 14:31:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.835 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.400 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.400 { 00:14:22.400 "auth": { 00:14:22.400 "dhgroup": "ffdhe6144", 00:14:22.400 "digest": "sha512", 00:14:22.400 "state": "completed" 00:14:22.400 }, 00:14:22.400 "cntlid": 129, 00:14:22.400 "listen_address": { 00:14:22.400 "adrfam": "IPv4", 00:14:22.400 "traddr": "10.0.0.2", 00:14:22.400 "trsvcid": "4420", 00:14:22.400 "trtype": "TCP" 00:14:22.400 }, 00:14:22.400 "peer_address": { 00:14:22.400 "adrfam": "IPv4", 00:14:22.400 "traddr": "10.0.0.1", 00:14:22.400 "trsvcid": "52684", 00:14:22.400 "trtype": "TCP" 00:14:22.400 }, 00:14:22.400 "qid": 0, 00:14:22.400 "state": "enabled", 00:14:22.400 "thread": "nvmf_tgt_poll_group_000" 00:14:22.400 } 00:14:22.400 ]' 00:14:22.400 14:31:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.658 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.916 14:31:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:23.850 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:24.109 14:31:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.109 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.367 00:14:24.368 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.368 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.368 14:31:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.932 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.932 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.933 { 00:14:24.933 "auth": { 00:14:24.933 "dhgroup": "ffdhe6144", 00:14:24.933 "digest": "sha512", 00:14:24.933 "state": "completed" 00:14:24.933 }, 00:14:24.933 "cntlid": 131, 00:14:24.933 "listen_address": { 00:14:24.933 "adrfam": "IPv4", 00:14:24.933 "traddr": "10.0.0.2", 00:14:24.933 "trsvcid": "4420", 00:14:24.933 "trtype": "TCP" 00:14:24.933 }, 00:14:24.933 "peer_address": { 00:14:24.933 "adrfam": "IPv4", 00:14:24.933 "traddr": "10.0.0.1", 00:14:24.933 "trsvcid": "52708", 00:14:24.933 "trtype": "TCP" 00:14:24.933 }, 00:14:24.933 "qid": 0, 00:14:24.933 "state": "enabled", 00:14:24.933 "thread": "nvmf_tgt_poll_group_000" 00:14:24.933 } 00:14:24.933 ]' 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
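(Context note: each connect_authenticate iteration in this trace follows the same pattern: the host-side bdev_nvme options pin one digest/DH-group pair, the target authorizes the host NQN with the matching DH-HMAC-CHAP key (plus a controller key when one is defined), and a controller attach over TCP triggers the handshake. A condensed sketch of one such round; the host UUID is a placeholder, and key1/ckey1 refer to key names set up earlier in the script, not shown in this excerpt.)

    # Host side: restrict negotiation to a single digest and DH group
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # Target side: authorize the host NQN with key1 / ckey1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller; this runs the DH-HMAC-CHAP exchange over TCP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1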
00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.933 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.272 14:31:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.206 14:31:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.773 00:14:26.773 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.773 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.773 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.031 { 00:14:27.031 "auth": { 00:14:27.031 "dhgroup": "ffdhe6144", 00:14:27.031 "digest": "sha512", 00:14:27.031 "state": "completed" 00:14:27.031 }, 00:14:27.031 "cntlid": 133, 00:14:27.031 "listen_address": { 00:14:27.031 "adrfam": "IPv4", 00:14:27.031 "traddr": "10.0.0.2", 00:14:27.031 "trsvcid": "4420", 00:14:27.031 "trtype": "TCP" 00:14:27.031 }, 00:14:27.031 "peer_address": { 00:14:27.031 "adrfam": "IPv4", 00:14:27.031 "traddr": "10.0.0.1", 00:14:27.031 "trsvcid": "52726", 00:14:27.031 "trtype": "TCP" 00:14:27.031 }, 00:14:27.031 "qid": 0, 00:14:27.031 "state": "enabled", 00:14:27.031 "thread": "nvmf_tgt_poll_group_000" 00:14:27.031 } 00:14:27.031 ]' 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:27.031 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.290 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.290 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.290 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.548 14:31:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret 
DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.481 14:31:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.481 14:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.481 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.481 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.046 00:14:29.046 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.046 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.046 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.304 14:31:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.304 { 00:14:29.304 "auth": { 00:14:29.304 "dhgroup": "ffdhe6144", 00:14:29.304 "digest": "sha512", 00:14:29.304 "state": "completed" 00:14:29.304 }, 00:14:29.304 "cntlid": 135, 00:14:29.304 "listen_address": { 00:14:29.304 "adrfam": "IPv4", 00:14:29.304 "traddr": "10.0.0.2", 00:14:29.304 "trsvcid": "4420", 00:14:29.304 "trtype": "TCP" 00:14:29.304 }, 00:14:29.304 "peer_address": { 00:14:29.304 "adrfam": "IPv4", 00:14:29.304 "traddr": "10.0.0.1", 00:14:29.304 "trsvcid": "52746", 00:14:29.304 "trtype": "TCP" 00:14:29.304 }, 00:14:29.304 "qid": 0, 00:14:29.304 "state": "enabled", 00:14:29.304 "thread": "nvmf_tgt_poll_group_000" 00:14:29.304 } 00:14:29.304 ]' 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.304 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.561 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.561 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.561 14:31:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.819 14:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.385 14:31:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:30.385 14:31:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.643 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.576 00:14:31.576 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.576 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.576 14:31:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.576 { 00:14:31.576 "auth": { 00:14:31.576 "dhgroup": "ffdhe8192", 00:14:31.576 "digest": "sha512", 00:14:31.576 "state": "completed" 00:14:31.576 }, 00:14:31.576 "cntlid": 137, 00:14:31.576 "listen_address": { 00:14:31.576 "adrfam": "IPv4", 00:14:31.576 "traddr": "10.0.0.2", 00:14:31.576 "trsvcid": "4420", 00:14:31.576 "trtype": "TCP" 00:14:31.576 }, 00:14:31.576 "peer_address": { 00:14:31.576 "adrfam": "IPv4", 00:14:31.576 "traddr": "10.0.0.1", 00:14:31.576 "trsvcid": "52768", 00:14:31.576 "trtype": "TCP" 00:14:31.576 }, 00:14:31.576 "qid": 0, 
00:14:31.576 "state": "enabled", 00:14:31.576 "thread": "nvmf_tgt_poll_group_000" 00:14:31.576 } 00:14:31.576 ]' 00:14:31.576 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.868 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.131 14:31:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:14:32.715 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.715 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:32.715 14:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.715 14:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.715 14:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.715 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.716 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:32.716 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.283 14:31:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.283 14:31:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.848 00:14:33.848 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.848 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.848 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.105 { 00:14:34.105 "auth": { 00:14:34.105 "dhgroup": "ffdhe8192", 00:14:34.105 "digest": "sha512", 00:14:34.105 "state": "completed" 00:14:34.105 }, 00:14:34.105 "cntlid": 139, 00:14:34.105 "listen_address": { 00:14:34.105 "adrfam": "IPv4", 00:14:34.105 "traddr": "10.0.0.2", 00:14:34.105 "trsvcid": "4420", 00:14:34.105 "trtype": "TCP" 00:14:34.105 }, 00:14:34.105 "peer_address": { 00:14:34.105 "adrfam": "IPv4", 00:14:34.105 "traddr": "10.0.0.1", 00:14:34.105 "trsvcid": "49672", 00:14:34.105 "trtype": "TCP" 00:14:34.105 }, 00:14:34.105 "qid": 0, 00:14:34.105 "state": "enabled", 00:14:34.105 "thread": "nvmf_tgt_poll_group_000" 00:14:34.105 } 00:14:34.105 ]' 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.105 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.106 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.106 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:34.106 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.106 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.106 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.106 14:31:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.363 14:31:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:01:M2U0NDBiNjJjMDBhZjc1YTRkY2E5MmJmOGZlMmY3NmZJxKEq: --dhchap-ctrl-secret DHHC-1:02:NDhkY2IzYTc5ZjEyMGM3MDUyMDNjZTI0NGJmZmFiNjEyMTM5YzM5MGIyMDk3YTBmfrpr+Q==: 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:35.296 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.555 14:31:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.122 00:14:36.122 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.122 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.122 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.381 { 00:14:36.381 "auth": { 00:14:36.381 "dhgroup": "ffdhe8192", 00:14:36.381 "digest": "sha512", 00:14:36.381 "state": "completed" 00:14:36.381 }, 00:14:36.381 "cntlid": 141, 00:14:36.381 "listen_address": { 00:14:36.381 "adrfam": "IPv4", 00:14:36.381 "traddr": "10.0.0.2", 00:14:36.381 "trsvcid": "4420", 00:14:36.381 "trtype": "TCP" 00:14:36.381 }, 00:14:36.381 "peer_address": { 00:14:36.381 "adrfam": "IPv4", 00:14:36.381 "traddr": "10.0.0.1", 00:14:36.381 "trsvcid": "49698", 00:14:36.381 "trtype": "TCP" 00:14:36.381 }, 00:14:36.381 "qid": 0, 00:14:36.381 "state": "enabled", 00:14:36.381 "thread": "nvmf_tgt_poll_group_000" 00:14:36.381 } 00:14:36.381 ]' 00:14:36.381 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.640 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:36.640 14:31:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.640 14:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:36.640 14:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.640 14:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.640 14:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.640 14:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.899 14:31:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:02:ODY5YjFjMjYyNTY4MjkzNGUwYTliYzYxYTc2YzBiNjQ5OGE0ZWRkYWMxZDIwZGJmsk9BeQ==: --dhchap-ctrl-secret DHHC-1:01:NjNkODQ2YzM5NDVkY2ZhODQzMDA3YzQxMmZlNTNlYzgMIiNw: 00:14:37.466 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.725 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:37.725 14:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.725 14:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.725 14:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.725 14:31:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.725 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:37.725 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.983 14:31:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.549 00:14:38.549 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.549 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.549 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.807 { 00:14:38.807 "auth": { 00:14:38.807 "dhgroup": "ffdhe8192", 00:14:38.807 "digest": "sha512", 00:14:38.807 "state": "completed" 00:14:38.807 }, 00:14:38.807 "cntlid": 143, 00:14:38.807 "listen_address": { 00:14:38.807 "adrfam": "IPv4", 00:14:38.807 "traddr": "10.0.0.2", 00:14:38.807 "trsvcid": "4420", 00:14:38.807 "trtype": "TCP" 00:14:38.807 }, 00:14:38.807 
"peer_address": { 00:14:38.807 "adrfam": "IPv4", 00:14:38.807 "traddr": "10.0.0.1", 00:14:38.807 "trsvcid": "49732", 00:14:38.807 "trtype": "TCP" 00:14:38.807 }, 00:14:38.807 "qid": 0, 00:14:38.807 "state": "enabled", 00:14:38.807 "thread": "nvmf_tgt_poll_group_000" 00:14:38.807 } 00:14:38.807 ]' 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.807 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.064 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.064 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.064 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.064 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.064 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.321 14:31:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:39.886 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.145 14:31:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.075 00:14:41.075 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.075 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.075 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.333 { 00:14:41.333 "auth": { 00:14:41.333 "dhgroup": "ffdhe8192", 00:14:41.333 "digest": "sha512", 00:14:41.333 "state": "completed" 00:14:41.333 }, 00:14:41.333 "cntlid": 145, 00:14:41.333 "listen_address": { 00:14:41.333 "adrfam": "IPv4", 00:14:41.333 "traddr": "10.0.0.2", 00:14:41.333 "trsvcid": "4420", 00:14:41.333 "trtype": "TCP" 00:14:41.333 }, 00:14:41.333 "peer_address": { 00:14:41.333 "adrfam": "IPv4", 00:14:41.333 "traddr": "10.0.0.1", 00:14:41.333 "trsvcid": "49766", 00:14:41.333 "trtype": "TCP" 00:14:41.333 }, 00:14:41.333 "qid": 0, 00:14:41.333 "state": "enabled", 00:14:41.333 "thread": "nvmf_tgt_poll_group_000" 00:14:41.333 } 00:14:41.333 ]' 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:41.333 14:31:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.333 14:31:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.589 14:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret DHHC-1:00:ZmE5OGQyY2M3YjdhYmNkZmYzZDE3NjUxYWJjYWE0ODRmNTU1YmU4NWZhMTg0NzA1ROaDAA==: --dhchap-ctrl-secret DHHC-1:03:YWExZDYwZTRiZGJhNmI5NmY1MGRjOGY0ZWU4OTY2MGVkYzkyMmQxOThmYzhjM2NmYjU3ZWVlNjBjNWJjODBiYUc6dtc=: 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:14:42.521 14:31:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:43.088 2024/07/15 14:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:43.088 request: 00:14:43.088 { 00:14:43.088 "method": "bdev_nvme_attach_controller", 00:14:43.088 "params": { 00:14:43.088 "name": "nvme0", 00:14:43.088 "trtype": "tcp", 00:14:43.088 "traddr": "10.0.0.2", 00:14:43.088 "adrfam": "ipv4", 00:14:43.088 "trsvcid": "4420", 00:14:43.088 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:43.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95", 00:14:43.088 "prchk_reftag": false, 00:14:43.088 "prchk_guard": false, 00:14:43.088 "hdgst": false, 00:14:43.088 "ddgst": false, 00:14:43.088 "dhchap_key": "key2" 00:14:43.088 } 00:14:43.088 } 00:14:43.088 Got JSON-RPC error response 00:14:43.088 GoRPCClient: error on JSON-RPC call 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:43.088 14:31:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:43.655 2024/07/15 14:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:43.655 request: 00:14:43.655 { 00:14:43.655 "method": "bdev_nvme_attach_controller", 00:14:43.655 "params": { 00:14:43.655 "name": "nvme0", 00:14:43.655 "trtype": "tcp", 00:14:43.655 "traddr": "10.0.0.2", 00:14:43.655 "adrfam": "ipv4", 00:14:43.655 "trsvcid": "4420", 00:14:43.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:43.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95", 00:14:43.655 "prchk_reftag": false, 00:14:43.655 "prchk_guard": false, 00:14:43.655 "hdgst": false, 00:14:43.655 "ddgst": false, 00:14:43.655 "dhchap_key": "key1", 00:14:43.655 "dhchap_ctrlr_key": "ckey2" 00:14:43.655 } 00:14:43.655 } 00:14:43.655 Got JSON-RPC error response 00:14:43.655 GoRPCClient: error on JSON-RPC call 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key1 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.655 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.220 2024/07/15 14:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:44.220 request: 00:14:44.220 { 00:14:44.220 "method": "bdev_nvme_attach_controller", 00:14:44.220 "params": { 00:14:44.220 "name": "nvme0", 00:14:44.220 "trtype": "tcp", 00:14:44.220 "traddr": "10.0.0.2", 00:14:44.220 "adrfam": "ipv4", 00:14:44.220 "trsvcid": "4420", 00:14:44.220 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:44.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95", 00:14:44.220 "prchk_reftag": false, 00:14:44.220 "prchk_guard": false, 00:14:44.220 "hdgst": false, 00:14:44.220 "ddgst": false, 00:14:44.220 "dhchap_key": "key1", 00:14:44.220 "dhchap_ctrlr_key": "ckey1" 00:14:44.220 } 00:14:44.220 } 00:14:44.220 Got JSON-RPC error response 00:14:44.220 GoRPCClient: error on JSON-RPC call 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 
-- # es=1 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77895 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77895 ']' 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77895 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77895 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.220 killing process with pid 77895 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77895' 00:14:44.220 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77895 00:14:44.221 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77895 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82832 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82832 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82832 ']' 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
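The target restart traced above follows the usual autotest pattern: stop the previous nvmf_tgt (pid 77895 in this run), start a fresh one inside the nvmf_tgt_ns_spdk namespace with DH-HMAC-CHAP debug logging enabled, and block until its RPC socket answers. A simplified sketch of that sequence is shown below; the real killprocess/nvmfappstart/waitforlisten helpers in autotest_common.sh and nvmf/common.sh do additional bookkeeping (pid validation, timeouts, shm cleanup), so this is an approximation of their effect, not their implementation.

# stop the previous target (pid 77895 in this run)
kill 77895

# start a new target in the test namespace; -L nvmf_auth enables the auth debug log flag,
# --wait-for-rpc holds the app in its pre-init state until an explicit start-init RPC arrives
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# poll the default RPC socket until the new process responds
# (roughly what waitforlisten 82832 does in the trace above)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done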
00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.478 14:31:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.411 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82832 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82832 ']' 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.412 14:31:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.670 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.670 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:45.670 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:45.670 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.670 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
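Each connect_authenticate pass in target/auth.sh has the same shape as the one starting here: authorize the host NQN on the subsystem with a DH-HMAC-CHAP key, attach a host-side controller with the matching key, then read back the qpair's negotiated auth parameters before tearing down. Condensed to its essential RPCs (host NQN, socket paths and key slot taken from this run; the script's rpc_cmd/hostrpc wrappers, error handling and the nvme-cli connect variant are omitted), the cycle looks roughly like:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95
SUBNQN=nqn.2024-03.io.spdk:cnode0
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# host side: restrict the initiator to the digest/dhgroup under test
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# target side: authorize the host with key3 (this pass uses no controller key)
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key3

# host side: attach an authenticated controller over TCP
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key3

# verify what was negotiated, then tear down before the next pass
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect sha512
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect ffdhe8192
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN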
00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.928 14:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.493 00:14:46.493 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.493 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.493 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.751 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.751 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.751 14:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.751 14:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.751 14:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.009 { 00:14:47.009 "auth": { 00:14:47.009 "dhgroup": "ffdhe8192", 00:14:47.009 "digest": "sha512", 00:14:47.009 "state": "completed" 00:14:47.009 }, 00:14:47.009 "cntlid": 1, 00:14:47.009 "listen_address": { 00:14:47.009 "adrfam": "IPv4", 00:14:47.009 "traddr": "10.0.0.2", 00:14:47.009 "trsvcid": "4420", 00:14:47.009 "trtype": "TCP" 00:14:47.009 }, 00:14:47.009 "peer_address": { 00:14:47.009 "adrfam": "IPv4", 00:14:47.009 "traddr": "10.0.0.1", 00:14:47.009 "trsvcid": "41168", 00:14:47.009 "trtype": "TCP" 00:14:47.009 }, 00:14:47.009 "qid": 0, 00:14:47.009 "state": "enabled", 00:14:47.009 "thread": "nvmf_tgt_poll_group_000" 00:14:47.009 } 00:14:47.009 ]' 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.009 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.267 14:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-secret 
DHHC-1:03:YWNhYWE5Y2QyNjU1ZmU1ZTg3NGNlOTcxZDFmODZjZTc0YTI3ZjIxNmYwMjU4MWUzYzUwMzMxYmNhM2VjMjQxMxCxR2s=: 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --dhchap-key key3 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.200 14:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:48.458 2024/07/15 14:31:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:48.458 request: 00:14:48.458 { 00:14:48.458 "method": "bdev_nvme_attach_controller", 00:14:48.458 "params": { 00:14:48.458 "name": "nvme0", 00:14:48.458 "trtype": "tcp", 00:14:48.458 "traddr": "10.0.0.2", 00:14:48.458 "adrfam": "ipv4", 00:14:48.458 "trsvcid": "4420", 00:14:48.458 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:48.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95", 00:14:48.458 "prchk_reftag": false, 00:14:48.458 "prchk_guard": false, 00:14:48.458 "hdgst": false, 00:14:48.458 "ddgst": false, 00:14:48.458 "dhchap_key": "key3" 00:14:48.458 } 00:14:48.458 } 00:14:48.458 Got JSON-RPC error response 00:14:48.458 GoRPCClient: error on JSON-RPC call 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:48.458 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.024 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:49.025 2024/07/15 14:31:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:49.025 request: 00:14:49.025 { 00:14:49.025 "method": "bdev_nvme_attach_controller", 00:14:49.025 "params": { 00:14:49.025 "name": "nvme0", 00:14:49.025 "trtype": "tcp", 00:14:49.025 "traddr": "10.0.0.2", 00:14:49.025 "adrfam": "ipv4", 00:14:49.025 "trsvcid": "4420", 00:14:49.025 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:49.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95", 00:14:49.025 "prchk_reftag": false, 00:14:49.025 "prchk_guard": false, 00:14:49.025 "hdgst": false, 00:14:49.025 "ddgst": false, 00:14:49.025 "dhchap_key": "key3" 00:14:49.025 } 00:14:49.025 } 00:14:49.025 Got JSON-RPC error response 00:14:49.025 GoRPCClient: error on JSON-RPC call 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:49.025 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:49.284 14:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:49.850 2024/07/15 14:31:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:49.850 request: 00:14:49.850 { 00:14:49.850 "method": "bdev_nvme_attach_controller", 00:14:49.850 "params": { 00:14:49.850 "name": "nvme0", 00:14:49.850 "trtype": "tcp", 00:14:49.850 "traddr": "10.0.0.2", 00:14:49.850 "adrfam": "ipv4", 00:14:49.850 "trsvcid": "4420", 00:14:49.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:49.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95", 00:14:49.850 "prchk_reftag": false, 00:14:49.850 "prchk_guard": false, 00:14:49.850 "hdgst": false, 00:14:49.850 "ddgst": false, 00:14:49.850 "dhchap_key": "key0", 00:14:49.850 "dhchap_ctrlr_key": "key1" 00:14:49.850 } 00:14:49.850 } 00:14:49.850 Got JSON-RPC error response 00:14:49.850 GoRPCClient: error on JSON-RPC call 00:14:49.850 14:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:49.850 14:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.850 14:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.850 14:31:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.850 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:49.850 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:50.108 00:14:50.108 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:50.108 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:50.108 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.366 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.366 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.366 14:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77939 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77939 ']' 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77939 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77939 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:50.625 killing process with pid 77939 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77939' 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77939 00:14:50.625 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77939 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.883 rmmod nvme_tcp 00:14:50.883 rmmod nvme_fabrics 00:14:50.883 rmmod nvme_keyring 
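The JSON-RPC Input/output errors above are deliberate negative tests: the script narrows the host's allowed DH-HMAC-CHAP digests and dhgroups (or changes which keys the subsystem's host entry carries) and then uses its NOT wrapper to assert that bdev_nvme_attach_controller fails, before a final unwrapped attach with key0 succeeds and is detached again. The failing pattern, much condensed and with the NOT helper replaced by a plain exit-status check (same address, NQNs, and sockets as above):

    # narrow the host's negotiable digests, then expect the attach to be rejected
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3; then
        echo "attach unexpectedly succeeded" >&2; exit 1
    fi

    # widen the host back out before the next case
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192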
00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82832 ']' 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82832 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82832 ']' 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82832 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82832 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:50.883 killing process with pid 82832 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82832' 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82832 00:14:50.883 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82832 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fvp /tmp/spdk.key-sha256.L1R /tmp/spdk.key-sha384.Uto /tmp/spdk.key-sha512.08h /tmp/spdk.key-sha512.SxX /tmp/spdk.key-sha384.LZj /tmp/spdk.key-sha256.gOM '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:51.141 ************************************ 00:14:51.141 END TEST nvmf_auth_target 00:14:51.141 ************************************ 00:14:51.141 00:14:51.141 real 2m55.588s 00:14:51.141 user 7m7.392s 00:14:51.141 sys 0m21.145s 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:51.141 14:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.141 14:31:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:51.141 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:51.141 14:31:30 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:51.141 14:31:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:51.141 14:31:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.141 14:31:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.400 ************************************ 00:14:51.400 START TEST nvmf_bdevio_no_huge 00:14:51.400 ************************************ 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:51.400 * Looking for test storage... 00:14:51.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.400 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:51.401 Cannot find device "nvmf_tgt_br" 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.401 Cannot find device "nvmf_tgt_br2" 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:51.401 14:31:30 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:51.401 Cannot find device "nvmf_tgt_br" 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:51.401 Cannot find device "nvmf_tgt_br2" 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.401 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.660 14:31:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:51.660 14:31:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:51.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:14:51.660 00:14:51.660 --- 10.0.0.2 ping statistics --- 00:14:51.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.660 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:51.660 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.660 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:51.660 00:14:51.660 --- 10.0.0.3 ping statistics --- 00:14:51.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.660 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:51.660 00:14:51.660 --- 10.0.0.1 ping statistics --- 00:14:51.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.660 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83240 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83240 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83240 ']' 00:14:51.660 14:31:31 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.660 14:31:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:51.918 [2024-07-15 14:31:31.278445] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:14:51.918 [2024-07-15 14:31:31.278539] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:51.918 [2024-07-15 14:31:31.418776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.176 [2024-07-15 14:31:31.557307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.176 [2024-07-15 14:31:31.557902] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.176 [2024-07-15 14:31:31.558524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.176 [2024-07-15 14:31:31.559072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.176 [2024-07-15 14:31:31.559383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
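This second target is the --no-huge variant: nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with --no-huge -s 1024, so it runs from 1024 MB of ordinary memory instead of hugepages, and the bdevio application further down is started the same way. Once the target is listening, the RPCs that follow below build the usual malloc-backed TCP subsystem; a condensed sketch, with the paths and arguments taken from this run:

    # start the hugepage-free target inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

    # after it listens on /var/tmp/spdk.sock: TCP transport, 64 MiB malloc bdev, subsystem, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420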
00:14:52.176 [2024-07-15 14:31:31.559773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:52.176 [2024-07-15 14:31:31.559881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:52.176 [2024-07-15 14:31:31.560012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:52.176 [2024-07-15 14:31:31.560602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 [2024-07-15 14:31:32.444524] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 Malloc0 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:53.109 [2024-07-15 14:31:32.482621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:53.109 { 00:14:53.109 "params": { 00:14:53.109 "name": "Nvme$subsystem", 00:14:53.109 "trtype": "$TEST_TRANSPORT", 00:14:53.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:53.109 "adrfam": "ipv4", 00:14:53.109 "trsvcid": "$NVMF_PORT", 00:14:53.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:53.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:53.109 "hdgst": ${hdgst:-false}, 00:14:53.109 "ddgst": ${ddgst:-false} 00:14:53.109 }, 00:14:53.109 "method": "bdev_nvme_attach_controller" 00:14:53.109 } 00:14:53.109 EOF 00:14:53.109 )") 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:53.109 14:31:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:53.109 "params": { 00:14:53.109 "name": "Nvme1", 00:14:53.109 "trtype": "tcp", 00:14:53.109 "traddr": "10.0.0.2", 00:14:53.109 "adrfam": "ipv4", 00:14:53.109 "trsvcid": "4420", 00:14:53.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:53.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:53.109 "hdgst": false, 00:14:53.109 "ddgst": false 00:14:53.109 }, 00:14:53.109 "method": "bdev_nvme_attach_controller" 00:14:53.109 }' 00:14:53.109 [2024-07-15 14:31:32.550815] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:14:53.109 [2024-07-15 14:31:32.550909] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83295 ] 00:14:53.109 [2024-07-15 14:31:32.693125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:53.367 [2024-07-15 14:31:32.814329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.367 [2024-07-15 14:31:32.814424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.367 [2024-07-15 14:31:32.814430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.626 I/O targets: 00:14:53.626 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:53.626 00:14:53.626 00:14:53.626 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.626 http://cunit.sourceforge.net/ 00:14:53.626 00:14:53.626 00:14:53.626 Suite: bdevio tests on: Nvme1n1 00:14:53.626 Test: blockdev write read block ...passed 00:14:53.626 Test: blockdev write zeroes read block ...passed 00:14:53.626 Test: blockdev write zeroes read no split ...passed 00:14:53.626 Test: blockdev write zeroes read split ...passed 00:14:53.626 Test: blockdev write zeroes read split partial ...passed 00:14:53.626 Test: blockdev reset ...[2024-07-15 14:31:33.099968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:53.626 [2024-07-15 14:31:33.100089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1771460 (9): Bad file descriptor 00:14:53.626 passed 00:14:53.626 Test: blockdev write read 8 blocks ...[2024-07-15 14:31:33.115424] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:53.626 passed 00:14:53.626 Test: blockdev write read size > 128k ...passed 00:14:53.626 Test: blockdev write read invalid size ...passed 00:14:53.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.626 Test: blockdev write read max offset ...passed 00:14:53.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.884 Test: blockdev writev readv 8 blocks ...passed 00:14:53.884 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.884 Test: blockdev writev readv block ...passed 00:14:53.884 Test: blockdev writev readv size > 128k ...passed 00:14:53.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.884 Test: blockdev comparev and writev ...[2024-07-15 14:31:33.289137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.289190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.289210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.289221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.289508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.289525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.289542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.289551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.289844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.289863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.289880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.289889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.290168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.290184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.290200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.884 [2024-07-15 14:31:33.290210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:53.884 passed 00:14:53.884 Test: blockdev nvme passthru rw ...passed 00:14:53.884 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:31:33.374141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.884 [2024-07-15 14:31:33.374189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.374314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.884 [2024-07-15 14:31:33.374330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.374439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.884 [2024-07-15 14:31:33.374454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:53.884 [2024-07-15 14:31:33.374564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.884 [2024-07-15 14:31:33.374579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:53.884 passed 00:14:53.884 Test: blockdev nvme admin passthru ...passed 00:14:53.884 Test: blockdev copy ...passed 00:14:53.884 00:14:53.884 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.884 suites 1 1 n/a 0 0 00:14:53.884 tests 23 23 23 0 0 00:14:53.884 asserts 152 152 152 0 n/a 00:14:53.884 00:14:53.884 Elapsed time = 0.920 seconds 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.452 rmmod nvme_tcp 00:14:54.452 rmmod nvme_fabrics 00:14:54.452 rmmod nvme_keyring 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83240 ']' 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 83240 00:14:54.452 
14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83240 ']' 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83240 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83240 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83240' 00:14:54.452 killing process with pid 83240 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83240 00:14:54.452 14:31:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83240 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:55.018 00:14:55.018 real 0m3.610s 00:14:55.018 user 0m13.053s 00:14:55.018 sys 0m1.251s 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:55.018 14:31:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:55.018 ************************************ 00:14:55.018 END TEST nvmf_bdevio_no_huge 00:14:55.018 ************************************ 00:14:55.018 14:31:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:55.018 14:31:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:55.018 14:31:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:55.018 14:31:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.018 14:31:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.018 ************************************ 00:14:55.019 START TEST nvmf_tls 00:14:55.019 ************************************ 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:55.019 * Looking for test storage... 
00:14:55.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:55.019 Cannot find device "nvmf_tgt_br" 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:55.019 Cannot find device "nvmf_tgt_br2" 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:55.019 Cannot find device "nvmf_tgt_br" 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:55.019 Cannot find device "nvmf_tgt_br2" 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:55.019 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:55.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:55.277 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:55.277 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:55.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:55.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:14:55.278 00:14:55.278 --- 10.0.0.2 ping statistics --- 00:14:55.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.278 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:55.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:55.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:55.278 00:14:55.278 --- 10.0.0.3 ping statistics --- 00:14:55.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.278 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:55.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:55.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:14:55.278 00:14:55.278 --- 10.0.0.1 ping statistics --- 00:14:55.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:55.278 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83478 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83478 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83478 ']' 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.278 14:31:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.537 [2024-07-15 14:31:34.941590] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:14:55.537 [2024-07-15 14:31:34.941894] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.537 [2024-07-15 14:31:35.086694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.795 [2024-07-15 14:31:35.153450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.795 [2024-07-15 14:31:35.153513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
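The veth/bridge plumbing that nvmf_veth_init traced above reduces to the following hand-runnable sketch (namespace, interface, and address names are taken verbatim from the trace; assumes root and iproute2, and omits the second target interface nvmf_tgt_if2/10.0.0.3 for brevity):

    # target namespace plus two veth pairs, one end of the target pair moved inside
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator side gets 10.0.0.1, target side (inside the namespace) gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bring the links up and bridge the peer ends together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP traffic (port 4420) and sanity-check both directions
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that in place, the nvmf_tgt launched under 'ip netns exec nvmf_tgt_ns_spdk' above is reachable from the host-side initiator at 10.0.0.2:4420, which is the listener address used for the rest of the TLS tests.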
00:14:55.795 [2024-07-15 14:31:35.153527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.795 [2024-07-15 14:31:35.153537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.795 [2024-07-15 14:31:35.153545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.795 [2024-07-15 14:31:35.153572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.362 14:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.362 14:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:56.362 14:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.362 14:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.362 14:31:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.620 14:31:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.620 14:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:56.620 14:31:35 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:56.879 true 00:14:56.879 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:56.879 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:57.148 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:57.148 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:57.148 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:57.407 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:57.407 14:31:36 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:57.664 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:57.665 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:57.665 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:57.962 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:57.962 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:58.244 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:58.244 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:58.244 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:58.244 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:58.501 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:58.501 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:58.501 14:31:37 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:58.757 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:58.757 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
00:14:59.014 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:59.014 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:59.014 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:59.272 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:59.272 14:31:38 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:59.529 14:31:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.kw3ETFvpl8 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.N9iDk5oXtw 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.kw3ETFvpl8 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.N9iDk5oXtw 00:14:59.786 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:00.063 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:00.321 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.kw3ETFvpl8 
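Condensed, the TLS preparation traced above comes down to three things: select the ssl socket implementation, pin TLS 1.3, and write a PSK in the NVMe TLS interchange format into a 0600-mode file; setup_nvmf_tgt (whose full xtrace follows) then builds the subsystem around that key. A minimal sketch using the exact key string, NQNs, and RPCs from this run (rpc.py path as in the trace; the middle of the key is, per the PSK interchange format, the base64 of the raw key with a CRC32 appended, which is what the inline 'python -' helper above computes):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # socket layer: ssl implementation, TLS 1.3 only, then finish framework init
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init

    # PSK in interchange format, permissions locked down as the target expects
    key_path=$(mktemp)
    echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
    chmod 0600 "$key_path"

    # target side: transport, subsystem, TLS listener (-k), namespace, and host with its PSK
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

Passing the key as a file path is the deprecated form the target warns about below ('nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09'); this run still uses it.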
00:15:00.321 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kw3ETFvpl8 00:15:00.321 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:00.579 [2024-07-15 14:31:39.944278] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.579 14:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:00.837 14:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:01.094 [2024-07-15 14:31:40.512387] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.094 [2024-07-15 14:31:40.512598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.094 14:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:01.352 malloc0 00:15:01.352 14:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:01.610 14:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kw3ETFvpl8 00:15:02.176 [2024-07-15 14:31:41.487054] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:02.176 14:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kw3ETFvpl8 00:15:12.150 Initializing NVMe Controllers 00:15:12.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:12.150 Initialization complete. Launching workers. 
00:15:12.150 ======================================================== 00:15:12.150 Latency(us) 00:15:12.150 Device Information : IOPS MiB/s Average min max 00:15:12.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9242.85 36.10 6926.00 1512.13 10712.80 00:15:12.150 ======================================================== 00:15:12.150 Total : 9242.85 36.10 6926.00 1512.13 10712.80 00:15:12.150 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kw3ETFvpl8 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kw3ETFvpl8' 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83841 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83841 /var/tmp/bdevperf.sock 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83841 ']' 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.150 14:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.408 [2024-07-15 14:31:51.762403] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
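On the initiator side the same key file is all that changes relative to a plain NVMe/TCP connect: spdk_nvme_perf above was pointed at the key with '-S ssl ... --psk-path', and the run_bdevperf case starting here does the equivalent over JSON-RPC. A condensed sketch of that flow with the sockets, NQNs, and key file from this run (the real script also polls the RPC socket via waitforlisten before issuing commands):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # start bdevperf idle (-z) on its own RPC socket
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach a controller with the PSK; TLS is negotiated during the fabrics connect
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kw3ETFvpl8

    # drive the verify workload against the attached TLSTESTn1 bdev
    $bdevperf_py -t 20 -s /var/tmp/bdevperf.sock perform_tests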
00:15:12.408 [2024-07-15 14:31:51.762509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83841 ] 00:15:12.408 [2024-07-15 14:31:51.901014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.408 [2024-07-15 14:31:51.969870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.385 14:31:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.385 14:31:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:13.385 14:31:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kw3ETFvpl8 00:15:13.643 [2024-07-15 14:31:53.017786] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.643 [2024-07-15 14:31:53.017906] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:13.643 TLSTESTn1 00:15:13.643 14:31:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:13.643 Running I/O for 10 seconds... 00:15:25.851 00:15:25.851 Latency(us) 00:15:25.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:25.851 Verification LBA range: start 0x0 length 0x2000 00:15:25.851 TLSTESTn1 : 10.02 3794.27 14.82 0.00 0.00 33667.89 11379.43 24427.05 00:15:25.851 =================================================================================================================== 00:15:25.851 Total : 3794.27 14.82 0.00 0.00 33667.89 11379.43 24427.05 00:15:25.851 0 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83841 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83841 ']' 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83841 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83841 00:15:25.851 killing process with pid 83841 00:15:25.851 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.851 00:15:25.851 Latency(us) 00:15:25.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.851 =================================================================================================================== 00:15:25.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
83841' 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83841 00:15:25.851 [2024-07-15 14:32:03.294891] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83841 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N9iDk5oXtw 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N9iDk5oXtw 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:25.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.851 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N9iDk5oXtw 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.N9iDk5oXtw' 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83993 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83993 /var/tmp/bdevperf.sock 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83993 ']' 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.852 [2024-07-15 14:32:03.501394] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
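The tls.sh@146 (wrong key file), @149 (wrong hostnqn), and @152 (wrong subsystem NQN) cases are negative tests: each run_bdevperf is expected to fail, so the 'Transport endpoint is not connected' messages and JSON-RPC Code=-5 errors they produce are the pass condition rather than a regression. They are wrapped in the NOT helper from autotest_common.sh, which, reconstructed loosely from this xtrace (the real helper has extra bookkeeping and special-cases exits caused by signals), behaves roughly like:

    # simplified sketch of the NOT wrapper: succeed only if the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped command, capture its exit status
        (( es != 0 ))     # invert the result for the caller
    }

    # e.g. the @146 case: connecting host1 with the unregistered key must not succeed
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.N9iDk5oXtw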
00:15:25.852 [2024-07-15 14:32:03.501637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83993 ] 00:15:25.852 [2024-07-15 14:32:03.635111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.852 [2024-07-15 14:32:03.693459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:25.852 14:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.N9iDk5oXtw 00:15:25.852 [2024-07-15 14:32:03.992197] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.852 [2024-07-15 14:32:03.992307] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:25.852 [2024-07-15 14:32:04.003146] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:25.852 [2024-07-15 14:32:04.003842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015ca0 (107): Transport endpoint is not connected 00:15:25.852 [2024-07-15 14:32:04.004832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2015ca0 (9): Bad file descriptor 00:15:25.852 [2024-07-15 14:32:04.005829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:25.852 [2024-07-15 14:32:04.005853] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:25.852 [2024-07-15 14:32:04.005867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:25.852 2024/07/15 14:32:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.N9iDk5oXtw subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:25.852 request: 00:15:25.852 { 00:15:25.852 "method": "bdev_nvme_attach_controller", 00:15:25.852 "params": { 00:15:25.852 "name": "TLSTEST", 00:15:25.852 "trtype": "tcp", 00:15:25.852 "traddr": "10.0.0.2", 00:15:25.852 "adrfam": "ipv4", 00:15:25.852 "trsvcid": "4420", 00:15:25.852 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:25.852 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:25.852 "prchk_reftag": false, 00:15:25.852 "prchk_guard": false, 00:15:25.852 "hdgst": false, 00:15:25.852 "ddgst": false, 00:15:25.852 "psk": "/tmp/tmp.N9iDk5oXtw" 00:15:25.852 } 00:15:25.852 } 00:15:25.852 Got JSON-RPC error response 00:15:25.852 GoRPCClient: error on JSON-RPC call 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83993 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83993 ']' 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83993 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83993 00:15:25.852 killing process with pid 83993 00:15:25.852 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.852 00:15:25.852 Latency(us) 00:15:25.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.852 =================================================================================================================== 00:15:25.852 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83993' 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83993 00:15:25.852 [2024-07-15 14:32:04.055990] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83993 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kw3ETFvpl8 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kw3ETFvpl8 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:25.852 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kw3ETFvpl8 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kw3ETFvpl8' 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84019 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84019 /var/tmp/bdevperf.sock 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84019 ']' 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.853 14:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.853 [2024-07-15 14:32:04.289375] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:15:25.853 [2024-07-15 14:32:04.289511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84019 ] 00:15:25.853 [2024-07-15 14:32:04.432822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.853 [2024-07-15 14:32:04.491316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.853 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:25.853 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:25.853 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.kw3ETFvpl8 00:15:26.122 [2024-07-15 14:32:05.548950] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:26.122 [2024-07-15 14:32:05.549063] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:26.122 [2024-07-15 14:32:05.553876] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:26.122 [2024-07-15 14:32:05.553919] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:26.122 [2024-07-15 14:32:05.553972] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:26.122 [2024-07-15 14:32:05.554572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcecca0 (107): Transport endpoint is not connected 00:15:26.122 [2024-07-15 14:32:05.555559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcecca0 (9): Bad file descriptor 00:15:26.122 [2024-07-15 14:32:05.556554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:26.122 [2024-07-15 14:32:05.556578] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:26.122 [2024-07-15 14:32:05.556592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:26.122 2024/07/15 14:32:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kw3ETFvpl8 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.122 request: 00:15:26.122 { 00:15:26.122 "method": "bdev_nvme_attach_controller", 00:15:26.122 "params": { 00:15:26.122 "name": "TLSTEST", 00:15:26.122 "trtype": "tcp", 00:15:26.122 "traddr": "10.0.0.2", 00:15:26.122 "adrfam": "ipv4", 00:15:26.122 "trsvcid": "4420", 00:15:26.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.122 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:26.122 "prchk_reftag": false, 00:15:26.122 "prchk_guard": false, 00:15:26.122 "hdgst": false, 00:15:26.122 "ddgst": false, 00:15:26.122 "psk": "/tmp/tmp.kw3ETFvpl8" 00:15:26.122 } 00:15:26.122 } 00:15:26.122 Got JSON-RPC error response 00:15:26.122 GoRPCClient: error on JSON-RPC call 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84019 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84019 ']' 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84019 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84019 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:26.122 killing process with pid 84019 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84019' 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84019 00:15:26.122 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.122 00:15:26.122 Latency(us) 00:15:26.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.122 =================================================================================================================== 00:15:26.122 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.122 [2024-07-15 14:32:05.597711] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:26.122 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84019 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kw3ETFvpl8 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kw3ETFvpl8 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kw3ETFvpl8 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kw3ETFvpl8' 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84065 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84065 /var/tmp/bdevperf.sock 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84065 ']' 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.388 14:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.388 [2024-07-15 14:32:05.805171] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:15:26.388 [2024-07-15 14:32:05.805259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84065 ] 00:15:26.388 [2024-07-15 14:32:05.939232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.647 [2024-07-15 14:32:05.996999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.647 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.647 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:26.647 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kw3ETFvpl8 00:15:26.905 [2024-07-15 14:32:06.290797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:26.905 [2024-07-15 14:32:06.290925] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:26.905 [2024-07-15 14:32:06.296303] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:26.905 [2024-07-15 14:32:06.296344] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:26.905 [2024-07-15 14:32:06.296399] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:26.905 [2024-07-15 14:32:06.296991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109dca0 (107): Transport endpoint is not connected 00:15:26.905 [2024-07-15 14:32:06.297976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109dca0 (9): Bad file descriptor 00:15:26.905 [2024-07-15 14:32:06.298970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:26.905 [2024-07-15 14:32:06.299000] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:26.905 [2024-07-15 14:32:06.299017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:26.905 2024/07/15 14:32:06 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.kw3ETFvpl8 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:26.905 request: 00:15:26.905 { 00:15:26.905 "method": "bdev_nvme_attach_controller", 00:15:26.905 "params": { 00:15:26.905 "name": "TLSTEST", 00:15:26.905 "trtype": "tcp", 00:15:26.905 "traddr": "10.0.0.2", 00:15:26.905 "adrfam": "ipv4", 00:15:26.905 "trsvcid": "4420", 00:15:26.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:26.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.905 "prchk_reftag": false, 00:15:26.905 "prchk_guard": false, 00:15:26.905 "hdgst": false, 00:15:26.905 "ddgst": false, 00:15:26.905 "psk": "/tmp/tmp.kw3ETFvpl8" 00:15:26.905 } 00:15:26.905 } 00:15:26.905 Got JSON-RPC error response 00:15:26.905 GoRPCClient: error on JSON-RPC call 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84065 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84065 ']' 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84065 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84065 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:26.905 killing process with pid 84065 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84065' 00:15:26.905 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.905 00:15:26.905 Latency(us) 00:15:26.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.905 =================================================================================================================== 00:15:26.905 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84065 00:15:26.905 [2024-07-15 14:32:06.351758] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:26.905 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84065 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84097 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84097 /var/tmp/bdevperf.sock 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84097 ']' 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.164 14:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.164 [2024-07-15 14:32:06.583876] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
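A note on the pattern here, since the xtrace is dense: every negative case in this stretch wraps run_bdevperf in the NOT helper from autotest_common.sh, so the JSON-RPC failure that follows each attempt is the expected result and the case passes on the wrapper's non-zero exit. Condensed to the two invocations seen so far (NQNs and key path exactly as they appear in the trace; NOT is the harness function, not a shell builtin), with a third, permission-based case coming later in the log:

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kw3ETFvpl8   # PSK never registered with the target
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''                    # no PSK at all against a TLS listener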
00:15:27.164 [2024-07-15 14:32:06.584462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84097 ] 00:15:27.164 [2024-07-15 14:32:06.723865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.422 [2024-07-15 14:32:06.784502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.989 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.989 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:27.989 14:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:28.247 [2024-07-15 14:32:07.783524] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:28.247 [2024-07-15 14:32:07.785130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc91240 (9): Bad file descriptor 00:15:28.247 [2024-07-15 14:32:07.786125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:28.247 [2024-07-15 14:32:07.786152] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:28.247 [2024-07-15 14:32:07.786167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:28.247 2024/07/15 14:32:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:28.247 request: 00:15:28.247 { 00:15:28.247 "method": "bdev_nvme_attach_controller", 00:15:28.247 "params": { 00:15:28.247 "name": "TLSTEST", 00:15:28.247 "trtype": "tcp", 00:15:28.247 "traddr": "10.0.0.2", 00:15:28.247 "adrfam": "ipv4", 00:15:28.247 "trsvcid": "4420", 00:15:28.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.247 "prchk_reftag": false, 00:15:28.247 "prchk_guard": false, 00:15:28.247 "hdgst": false, 00:15:28.247 "ddgst": false 00:15:28.247 } 00:15:28.247 } 00:15:28.247 Got JSON-RPC error response 00:15:28.247 GoRPCClient: error on JSON-RPC call 00:15:28.247 14:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84097 00:15:28.247 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84097 ']' 00:15:28.247 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84097 00:15:28.247 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:28.247 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.248 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84097 00:15:28.248 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:28.248 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:15:28.248 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84097' 00:15:28.248 killing process with pid 84097 00:15:28.248 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.248 00:15:28.248 Latency(us) 00:15:28.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.248 =================================================================================================================== 00:15:28.248 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:28.248 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84097 00:15:28.248 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84097 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83478 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83478 ']' 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83478 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:28.506 14:32:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83478 00:15:28.506 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:28.506 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:28.506 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83478' 00:15:28.506 killing process with pid 83478 00:15:28.506 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83478 00:15:28.506 [2024-07-15 14:32:08.012744] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:28.506 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83478 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ga96Pz0MiA 
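The lines around target/tls.sh@159-162 build the long-form interchange PSK and stage it in a temp file; a minimal sketch of those steps, using the same key and path as this run (xtrace does not show output redirection, so writing the key into key_long_path is inferred from the chmod that follows):

    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    key_long_path=$(mktemp)                     # /tmp/tmp.ga96Pz0MiA in this run
    echo -n "$key_long" > "$key_long_path"      # no trailing newline in the key file
    chmod 0600 "$key_long_path"                 # both sides reject a more permissive mode, as the 0666 case later shows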
00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ga96Pz0MiA 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84147 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84147 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84147 ']' 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.765 14:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.765 [2024-07-15 14:32:08.280252] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:28.765 [2024-07-15 14:32:08.280342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.024 [2024-07-15 14:32:08.415807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.024 [2024-07-15 14:32:08.472144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.024 [2024-07-15 14:32:08.472189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.024 [2024-07-15 14:32:08.472201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.024 [2024-07-15 14:32:08.472210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.024 [2024-07-15 14:32:08.472217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
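The setup_nvmf_tgt helper traced below (target/tls.sh@51-58) amounts to this RPC sequence against the freshly started target; the address, NQNs, malloc bdev geometry and key path are exactly the ones in the trace, and -k on the listener is what enables TLS:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA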
00:15:29.024 [2024-07-15 14:32:08.472249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ga96Pz0MiA 00:15:29.960 14:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:30.219 [2024-07-15 14:32:09.597558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.219 14:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:30.477 14:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:30.735 [2024-07-15 14:32:10.101649] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:30.735 [2024-07-15 14:32:10.101897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.735 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:30.994 malloc0 00:15:30.994 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:31.251 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:15:31.519 [2024-07-15 14:32:10.880249] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ga96Pz0MiA 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ga96Pz0MiA' 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84255 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:31.519 14:32:10 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84255 /var/tmp/bdevperf.sock 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84255 ']' 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:31.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:31.519 14:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.519 [2024-07-15 14:32:10.949252] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:31.519 [2024-07-15 14:32:10.949333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84255 ] 00:15:31.519 [2024-07-15 14:32:11.080386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.809 [2024-07-15 14:32:11.169093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.416 14:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.416 14:32:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:32.416 14:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:15:32.674 [2024-07-15 14:32:12.218898] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:32.674 [2024-07-15 14:32:12.219012] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:32.933 TLSTESTn1 00:15:32.933 14:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:32.933 Running I/O for 10 seconds... 
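On the initiator side, the successful case boils down to the steps traced just above (waitforlisten and trap bookkeeping omitted): start bdevperf on its own RPC socket, attach the TLS controller with the same PSK file, then drive the ten-second verify run whose results follow over that socket.

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests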
00:15:42.972 00:15:42.972 Latency(us) 00:15:42.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.972 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:42.972 Verification LBA range: start 0x0 length 0x2000 00:15:42.972 TLSTESTn1 : 10.02 3996.68 15.61 0.00 0.00 31965.69 6285.50 27167.65 00:15:42.972 =================================================================================================================== 00:15:42.972 Total : 3996.68 15.61 0.00 0.00 31965.69 6285.50 27167.65 00:15:42.972 0 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84255 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84255 ']' 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84255 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84255 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:42.972 killing process with pid 84255 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84255' 00:15:42.972 Received shutdown signal, test time was about 10.000000 seconds 00:15:42.972 00:15:42.972 Latency(us) 00:15:42.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.972 =================================================================================================================== 00:15:42.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84255 00:15:42.972 [2024-07-15 14:32:22.530581] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:42.972 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84255 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ga96Pz0MiA 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ga96Pz0MiA 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ga96Pz0MiA 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ga96Pz0MiA 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:43.231 
14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ga96Pz0MiA' 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84402 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84402 /var/tmp/bdevperf.sock 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84402 ']' 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.231 14:32:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.231 [2024-07-15 14:32:22.742611] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:43.231 [2024-07-15 14:32:22.742693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84402 ] 00:15:43.488 [2024-07-15 14:32:22.876733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.488 [2024-07-15 14:32:22.944820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.420 14:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.420 14:32:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:44.420 14:32:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:15:44.421 [2024-07-15 14:32:24.000006] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:44.421 [2024-07-15 14:32:24.000081] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:44.421 [2024-07-15 14:32:24.000091] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ga96Pz0MiA 00:15:44.421 2024/07/15 14:32:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.ga96Pz0MiA subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:44.421 request: 00:15:44.421 { 00:15:44.421 "method": "bdev_nvme_attach_controller", 00:15:44.421 "params": { 00:15:44.421 "name": "TLSTEST", 00:15:44.421 "trtype": "tcp", 00:15:44.421 "traddr": "10.0.0.2", 00:15:44.421 "adrfam": "ipv4", 00:15:44.421 "trsvcid": "4420", 00:15:44.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.421 "prchk_reftag": false, 00:15:44.421 "prchk_guard": false, 00:15:44.421 "hdgst": false, 00:15:44.421 "ddgst": false, 00:15:44.421 "psk": "/tmp/tmp.ga96Pz0MiA" 00:15:44.421 } 00:15:44.421 } 00:15:44.421 Got JSON-RPC error response 00:15:44.421 GoRPCClient: error on JSON-RPC call 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84402 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84402 ']' 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84402 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84402 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:44.679 killing process with pid 84402 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84402' 00:15:44.679 Received shutdown signal, test time was about 10.000000 seconds 00:15:44.679 00:15:44.679 Latency(us) 00:15:44.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.679 =================================================================================================================== 00:15:44.679 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84402 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84402 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84147 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84147 ']' 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84147 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84147 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:44.679 killing process with pid 84147 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84147' 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84147 00:15:44.679 [2024-07-15 14:32:24.232147] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:44.679 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84147 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84453 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84453 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84453 ']' 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.937 14:32:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.937 [2024-07-15 14:32:24.456349] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:44.937 [2024-07-15 14:32:24.456441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.196 [2024-07-15 14:32:24.592504] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.196 [2024-07-15 14:32:24.650817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.196 [2024-07-15 14:32:24.650880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.196 [2024-07-15 14:32:24.650891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.196 [2024-07-15 14:32:24.650900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.196 [2024-07-15 14:32:24.650907] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
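This block and the one just above probe the PSK file-mode check from both ends: after chmod 0666 on the key, the initiator's bdev_nvme_attach_controller was rejected with 'Incorrect permissions for PSK file' (Code=-1, Operation not permitted), and the nvmf_subsystem_add_host call traced next fails on the target for the same reason before the mode is put back. Condensed, with the same key path:

    chmod 0666 /tmp/tmp.ga96Pz0MiA     # deliberately too permissive
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ga96Pz0MiA   # initiator refuses to load the key
    NOT setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA                                                      # target refuses it in nvmf_subsystem_add_host
    chmod 0600 /tmp/tmp.ga96Pz0MiA     # restored before the next positive run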
00:15:45.196 [2024-07-15 14:32:24.650938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.125 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.125 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:46.125 14:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.125 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ga96Pz0MiA 00:15:46.126 14:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:46.126 [2024-07-15 14:32:25.702648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.383 14:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:46.383 14:32:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:46.640 [2024-07-15 14:32:26.214759] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:46.640 [2024-07-15 14:32:26.214973] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.897 14:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:47.154 malloc0 00:15:47.154 14:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:47.411 14:32:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:15:47.411 [2024-07-15 14:32:26.985550] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:47.411 [2024-07-15 14:32:26.985593] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:47.411 [2024-07-15 14:32:26.985626] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:47.411 2024/07/15 14:32:26 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.ga96Pz0MiA], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:47.411 request: 00:15:47.411 { 00:15:47.411 "method": "nvmf_subsystem_add_host", 00:15:47.411 "params": { 00:15:47.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.411 "host": "nqn.2016-06.io.spdk:host1", 00:15:47.411 "psk": "/tmp/tmp.ga96Pz0MiA" 00:15:47.411 } 00:15:47.411 } 00:15:47.411 Got JSON-RPC error response 00:15:47.411 GoRPCClient: error on JSON-RPC call 00:15:47.411 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:47.411 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:47.411 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84453 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84453 ']' 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84453 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84453 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:47.668 killing process with pid 84453 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84453' 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84453 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84453 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ga96Pz0MiA 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84570 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84570 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84570 ']' 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
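With the mode restored, the target comes up once more (nvmfpid 84570 here), bdevperf (pid 84667) attaches over TLS again, and target/tls.sh@196-197 snapshot both configurations; the two large JSON dumps further down come from these calls, captured into shell variables by the script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgtconf=$("$rpc" save_config)                                  # target side: TLS listener plus the PSK-backed host entry
    bdevperfconf=$("$rpc" -s /var/tmp/bdevperf.sock save_config)   # initiator side: attach_controller recorded with its "psk"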
00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.668 14:32:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.926 [2024-07-15 14:32:27.272149] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:47.926 [2024-07-15 14:32:27.272250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.926 [2024-07-15 14:32:27.403190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.926 [2024-07-15 14:32:27.460048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.926 [2024-07-15 14:32:27.460099] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.926 [2024-07-15 14:32:27.460111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.926 [2024-07-15 14:32:27.460119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.926 [2024-07-15 14:32:27.460126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.926 [2024-07-15 14:32:27.460157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ga96Pz0MiA 00:15:48.857 14:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:49.114 [2024-07-15 14:32:28.454088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.114 14:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:49.372 14:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:49.630 [2024-07-15 14:32:28.970183] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:49.630 [2024-07-15 14:32:28.970393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.630 14:32:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:49.888 malloc0 00:15:49.888 14:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:50.147 14:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:15:50.147 [2024-07-15 14:32:29.720894] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84667 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84667 /var/tmp/bdevperf.sock 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84667 ']' 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.405 14:32:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:50.405 [2024-07-15 14:32:29.800391] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:50.405 [2024-07-15 14:32:29.800517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84667 ] 00:15:50.405 [2024-07-15 14:32:29.939726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.662 [2024-07-15 14:32:30.000013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.281 14:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.281 14:32:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:51.281 14:32:30 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:15:51.539 [2024-07-15 14:32:30.964821] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:51.539 [2024-07-15 14:32:30.964934] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:51.539 TLSTESTn1 00:15:51.539 14:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:52.106 14:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:52.106 "subsystems": [ 00:15:52.106 { 00:15:52.106 "subsystem": "keyring", 00:15:52.106 "config": [] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "iobuf", 00:15:52.106 "config": [ 00:15:52.106 { 00:15:52.106 "method": "iobuf_set_options", 00:15:52.106 "params": { 00:15:52.106 "large_bufsize": 
135168, 00:15:52.106 "large_pool_count": 1024, 00:15:52.106 "small_bufsize": 8192, 00:15:52.106 "small_pool_count": 8192 00:15:52.106 } 00:15:52.106 } 00:15:52.106 ] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "sock", 00:15:52.106 "config": [ 00:15:52.106 { 00:15:52.106 "method": "sock_set_default_impl", 00:15:52.106 "params": { 00:15:52.106 "impl_name": "posix" 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "sock_impl_set_options", 00:15:52.106 "params": { 00:15:52.106 "enable_ktls": false, 00:15:52.106 "enable_placement_id": 0, 00:15:52.106 "enable_quickack": false, 00:15:52.106 "enable_recv_pipe": true, 00:15:52.106 "enable_zerocopy_send_client": false, 00:15:52.106 "enable_zerocopy_send_server": true, 00:15:52.106 "impl_name": "ssl", 00:15:52.106 "recv_buf_size": 4096, 00:15:52.106 "send_buf_size": 4096, 00:15:52.106 "tls_version": 0, 00:15:52.106 "zerocopy_threshold": 0 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "sock_impl_set_options", 00:15:52.106 "params": { 00:15:52.106 "enable_ktls": false, 00:15:52.106 "enable_placement_id": 0, 00:15:52.106 "enable_quickack": false, 00:15:52.106 "enable_recv_pipe": true, 00:15:52.106 "enable_zerocopy_send_client": false, 00:15:52.106 "enable_zerocopy_send_server": true, 00:15:52.106 "impl_name": "posix", 00:15:52.106 "recv_buf_size": 2097152, 00:15:52.106 "send_buf_size": 2097152, 00:15:52.106 "tls_version": 0, 00:15:52.106 "zerocopy_threshold": 0 00:15:52.106 } 00:15:52.106 } 00:15:52.106 ] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "vmd", 00:15:52.106 "config": [] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "accel", 00:15:52.106 "config": [ 00:15:52.106 { 00:15:52.106 "method": "accel_set_options", 00:15:52.106 "params": { 00:15:52.106 "buf_count": 2048, 00:15:52.106 "large_cache_size": 16, 00:15:52.106 "sequence_count": 2048, 00:15:52.106 "small_cache_size": 128, 00:15:52.106 "task_count": 2048 00:15:52.106 } 00:15:52.106 } 00:15:52.106 ] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "bdev", 00:15:52.106 "config": [ 00:15:52.106 { 00:15:52.106 "method": "bdev_set_options", 00:15:52.106 "params": { 00:15:52.106 "bdev_auto_examine": true, 00:15:52.106 "bdev_io_cache_size": 256, 00:15:52.106 "bdev_io_pool_size": 65535, 00:15:52.106 "iobuf_large_cache_size": 16, 00:15:52.106 "iobuf_small_cache_size": 128 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "bdev_raid_set_options", 00:15:52.106 "params": { 00:15:52.106 "process_window_size_kb": 1024 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "bdev_iscsi_set_options", 00:15:52.106 "params": { 00:15:52.106 "timeout_sec": 30 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "bdev_nvme_set_options", 00:15:52.106 "params": { 00:15:52.106 "action_on_timeout": "none", 00:15:52.106 "allow_accel_sequence": false, 00:15:52.106 "arbitration_burst": 0, 00:15:52.106 "bdev_retry_count": 3, 00:15:52.106 "ctrlr_loss_timeout_sec": 0, 00:15:52.106 "delay_cmd_submit": true, 00:15:52.106 "dhchap_dhgroups": [ 00:15:52.106 "null", 00:15:52.106 "ffdhe2048", 00:15:52.106 "ffdhe3072", 00:15:52.106 "ffdhe4096", 00:15:52.106 "ffdhe6144", 00:15:52.106 "ffdhe8192" 00:15:52.106 ], 00:15:52.106 "dhchap_digests": [ 00:15:52.106 "sha256", 00:15:52.106 "sha384", 00:15:52.106 "sha512" 00:15:52.106 ], 00:15:52.106 "disable_auto_failback": false, 00:15:52.106 "fast_io_fail_timeout_sec": 0, 00:15:52.106 "generate_uuids": false, 00:15:52.106 "high_priority_weight": 0, 
00:15:52.106 "io_path_stat": false, 00:15:52.106 "io_queue_requests": 0, 00:15:52.106 "keep_alive_timeout_ms": 10000, 00:15:52.106 "low_priority_weight": 0, 00:15:52.106 "medium_priority_weight": 0, 00:15:52.106 "nvme_adminq_poll_period_us": 10000, 00:15:52.106 "nvme_error_stat": false, 00:15:52.106 "nvme_ioq_poll_period_us": 0, 00:15:52.106 "rdma_cm_event_timeout_ms": 0, 00:15:52.106 "rdma_max_cq_size": 0, 00:15:52.106 "rdma_srq_size": 0, 00:15:52.106 "reconnect_delay_sec": 0, 00:15:52.106 "timeout_admin_us": 0, 00:15:52.106 "timeout_us": 0, 00:15:52.106 "transport_ack_timeout": 0, 00:15:52.106 "transport_retry_count": 4, 00:15:52.106 "transport_tos": 0 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "bdev_nvme_set_hotplug", 00:15:52.106 "params": { 00:15:52.106 "enable": false, 00:15:52.106 "period_us": 100000 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "bdev_malloc_create", 00:15:52.106 "params": { 00:15:52.106 "block_size": 4096, 00:15:52.106 "name": "malloc0", 00:15:52.106 "num_blocks": 8192, 00:15:52.106 "optimal_io_boundary": 0, 00:15:52.106 "physical_block_size": 4096, 00:15:52.106 "uuid": "b047a7bc-47e4-4eca-8624-b283b2c0b7ee" 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "bdev_wait_for_examine" 00:15:52.106 } 00:15:52.106 ] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "nbd", 00:15:52.106 "config": [] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "scheduler", 00:15:52.106 "config": [ 00:15:52.106 { 00:15:52.106 "method": "framework_set_scheduler", 00:15:52.106 "params": { 00:15:52.106 "name": "static" 00:15:52.106 } 00:15:52.106 } 00:15:52.106 ] 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "subsystem": "nvmf", 00:15:52.106 "config": [ 00:15:52.106 { 00:15:52.106 "method": "nvmf_set_config", 00:15:52.106 "params": { 00:15:52.106 "admin_cmd_passthru": { 00:15:52.106 "identify_ctrlr": false 00:15:52.106 }, 00:15:52.106 "discovery_filter": "match_any" 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "nvmf_set_max_subsystems", 00:15:52.106 "params": { 00:15:52.106 "max_subsystems": 1024 00:15:52.106 } 00:15:52.106 }, 00:15:52.106 { 00:15:52.106 "method": "nvmf_set_crdt", 00:15:52.106 "params": { 00:15:52.106 "crdt1": 0, 00:15:52.106 "crdt2": 0, 00:15:52.106 "crdt3": 0 00:15:52.106 } 00:15:52.107 }, 00:15:52.107 { 00:15:52.107 "method": "nvmf_create_transport", 00:15:52.107 "params": { 00:15:52.107 "abort_timeout_sec": 1, 00:15:52.107 "ack_timeout": 0, 00:15:52.107 "buf_cache_size": 4294967295, 00:15:52.107 "c2h_success": false, 00:15:52.107 "data_wr_pool_size": 0, 00:15:52.107 "dif_insert_or_strip": false, 00:15:52.107 "in_capsule_data_size": 4096, 00:15:52.107 "io_unit_size": 131072, 00:15:52.107 "max_aq_depth": 128, 00:15:52.107 "max_io_qpairs_per_ctrlr": 127, 00:15:52.107 "max_io_size": 131072, 00:15:52.107 "max_queue_depth": 128, 00:15:52.107 "num_shared_buffers": 511, 00:15:52.107 "sock_priority": 0, 00:15:52.107 "trtype": "TCP", 00:15:52.107 "zcopy": false 00:15:52.107 } 00:15:52.107 }, 00:15:52.107 { 00:15:52.107 "method": "nvmf_create_subsystem", 00:15:52.107 "params": { 00:15:52.107 "allow_any_host": false, 00:15:52.107 "ana_reporting": false, 00:15:52.107 "max_cntlid": 65519, 00:15:52.107 "max_namespaces": 10, 00:15:52.107 "min_cntlid": 1, 00:15:52.107 "model_number": "SPDK bdev Controller", 00:15:52.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.107 "serial_number": "SPDK00000000000001" 00:15:52.107 } 00:15:52.107 }, 00:15:52.107 { 00:15:52.107 "method": 
"nvmf_subsystem_add_host", 00:15:52.107 "params": { 00:15:52.107 "host": "nqn.2016-06.io.spdk:host1", 00:15:52.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.107 "psk": "/tmp/tmp.ga96Pz0MiA" 00:15:52.107 } 00:15:52.107 }, 00:15:52.107 { 00:15:52.107 "method": "nvmf_subsystem_add_ns", 00:15:52.107 "params": { 00:15:52.107 "namespace": { 00:15:52.107 "bdev_name": "malloc0", 00:15:52.107 "nguid": "B047A7BC47E44ECA8624B283B2C0B7EE", 00:15:52.107 "no_auto_visible": false, 00:15:52.107 "nsid": 1, 00:15:52.107 "uuid": "b047a7bc-47e4-4eca-8624-b283b2c0b7ee" 00:15:52.107 }, 00:15:52.107 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:52.107 } 00:15:52.107 }, 00:15:52.107 { 00:15:52.107 "method": "nvmf_subsystem_add_listener", 00:15:52.107 "params": { 00:15:52.107 "listen_address": { 00:15:52.107 "adrfam": "IPv4", 00:15:52.107 "traddr": "10.0.0.2", 00:15:52.107 "trsvcid": "4420", 00:15:52.107 "trtype": "TCP" 00:15:52.107 }, 00:15:52.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.107 "secure_channel": true 00:15:52.107 } 00:15:52.107 } 00:15:52.107 ] 00:15:52.107 } 00:15:52.107 ] 00:15:52.107 }' 00:15:52.107 14:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:52.366 14:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:52.366 "subsystems": [ 00:15:52.366 { 00:15:52.366 "subsystem": "keyring", 00:15:52.366 "config": [] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "subsystem": "iobuf", 00:15:52.366 "config": [ 00:15:52.366 { 00:15:52.366 "method": "iobuf_set_options", 00:15:52.366 "params": { 00:15:52.366 "large_bufsize": 135168, 00:15:52.366 "large_pool_count": 1024, 00:15:52.366 "small_bufsize": 8192, 00:15:52.366 "small_pool_count": 8192 00:15:52.366 } 00:15:52.366 } 00:15:52.366 ] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "subsystem": "sock", 00:15:52.366 "config": [ 00:15:52.366 { 00:15:52.366 "method": "sock_set_default_impl", 00:15:52.366 "params": { 00:15:52.366 "impl_name": "posix" 00:15:52.366 } 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "method": "sock_impl_set_options", 00:15:52.366 "params": { 00:15:52.366 "enable_ktls": false, 00:15:52.366 "enable_placement_id": 0, 00:15:52.366 "enable_quickack": false, 00:15:52.366 "enable_recv_pipe": true, 00:15:52.366 "enable_zerocopy_send_client": false, 00:15:52.366 "enable_zerocopy_send_server": true, 00:15:52.366 "impl_name": "ssl", 00:15:52.366 "recv_buf_size": 4096, 00:15:52.366 "send_buf_size": 4096, 00:15:52.366 "tls_version": 0, 00:15:52.366 "zerocopy_threshold": 0 00:15:52.366 } 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "method": "sock_impl_set_options", 00:15:52.366 "params": { 00:15:52.366 "enable_ktls": false, 00:15:52.366 "enable_placement_id": 0, 00:15:52.366 "enable_quickack": false, 00:15:52.366 "enable_recv_pipe": true, 00:15:52.366 "enable_zerocopy_send_client": false, 00:15:52.366 "enable_zerocopy_send_server": true, 00:15:52.366 "impl_name": "posix", 00:15:52.366 "recv_buf_size": 2097152, 00:15:52.366 "send_buf_size": 2097152, 00:15:52.366 "tls_version": 0, 00:15:52.366 "zerocopy_threshold": 0 00:15:52.366 } 00:15:52.366 } 00:15:52.366 ] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "subsystem": "vmd", 00:15:52.366 "config": [] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "subsystem": "accel", 00:15:52.366 "config": [ 00:15:52.366 { 00:15:52.366 "method": "accel_set_options", 00:15:52.366 "params": { 00:15:52.366 "buf_count": 2048, 00:15:52.366 "large_cache_size": 16, 00:15:52.366 "sequence_count": 2048, 00:15:52.366 
"small_cache_size": 128, 00:15:52.366 "task_count": 2048 00:15:52.366 } 00:15:52.366 } 00:15:52.366 ] 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "subsystem": "bdev", 00:15:52.366 "config": [ 00:15:52.366 { 00:15:52.366 "method": "bdev_set_options", 00:15:52.366 "params": { 00:15:52.366 "bdev_auto_examine": true, 00:15:52.366 "bdev_io_cache_size": 256, 00:15:52.366 "bdev_io_pool_size": 65535, 00:15:52.366 "iobuf_large_cache_size": 16, 00:15:52.366 "iobuf_small_cache_size": 128 00:15:52.366 } 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "method": "bdev_raid_set_options", 00:15:52.366 "params": { 00:15:52.366 "process_window_size_kb": 1024 00:15:52.366 } 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "method": "bdev_iscsi_set_options", 00:15:52.366 "params": { 00:15:52.366 "timeout_sec": 30 00:15:52.366 } 00:15:52.366 }, 00:15:52.366 { 00:15:52.366 "method": "bdev_nvme_set_options", 00:15:52.366 "params": { 00:15:52.366 "action_on_timeout": "none", 00:15:52.366 "allow_accel_sequence": false, 00:15:52.366 "arbitration_burst": 0, 00:15:52.367 "bdev_retry_count": 3, 00:15:52.367 "ctrlr_loss_timeout_sec": 0, 00:15:52.367 "delay_cmd_submit": true, 00:15:52.367 "dhchap_dhgroups": [ 00:15:52.367 "null", 00:15:52.367 "ffdhe2048", 00:15:52.367 "ffdhe3072", 00:15:52.367 "ffdhe4096", 00:15:52.367 "ffdhe6144", 00:15:52.367 "ffdhe8192" 00:15:52.367 ], 00:15:52.367 "dhchap_digests": [ 00:15:52.367 "sha256", 00:15:52.367 "sha384", 00:15:52.367 "sha512" 00:15:52.367 ], 00:15:52.367 "disable_auto_failback": false, 00:15:52.367 "fast_io_fail_timeout_sec": 0, 00:15:52.367 "generate_uuids": false, 00:15:52.367 "high_priority_weight": 0, 00:15:52.367 "io_path_stat": false, 00:15:52.367 "io_queue_requests": 512, 00:15:52.367 "keep_alive_timeout_ms": 10000, 00:15:52.367 "low_priority_weight": 0, 00:15:52.367 "medium_priority_weight": 0, 00:15:52.367 "nvme_adminq_poll_period_us": 10000, 00:15:52.367 "nvme_error_stat": false, 00:15:52.367 "nvme_ioq_poll_period_us": 0, 00:15:52.367 "rdma_cm_event_timeout_ms": 0, 00:15:52.367 "rdma_max_cq_size": 0, 00:15:52.367 "rdma_srq_size": 0, 00:15:52.367 "reconnect_delay_sec": 0, 00:15:52.367 "timeout_admin_us": 0, 00:15:52.367 "timeout_us": 0, 00:15:52.367 "transport_ack_timeout": 0, 00:15:52.367 "transport_retry_count": 4, 00:15:52.367 "transport_tos": 0 00:15:52.367 } 00:15:52.367 }, 00:15:52.367 { 00:15:52.367 "method": "bdev_nvme_attach_controller", 00:15:52.367 "params": { 00:15:52.367 "adrfam": "IPv4", 00:15:52.367 "ctrlr_loss_timeout_sec": 0, 00:15:52.367 "ddgst": false, 00:15:52.367 "fast_io_fail_timeout_sec": 0, 00:15:52.367 "hdgst": false, 00:15:52.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:52.367 "name": "TLSTEST", 00:15:52.367 "prchk_guard": false, 00:15:52.367 "prchk_reftag": false, 00:15:52.367 "psk": "/tmp/tmp.ga96Pz0MiA", 00:15:52.367 "reconnect_delay_sec": 0, 00:15:52.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.367 "traddr": "10.0.0.2", 00:15:52.367 "trsvcid": "4420", 00:15:52.367 "trtype": "TCP" 00:15:52.367 } 00:15:52.367 }, 00:15:52.367 { 00:15:52.367 "method": "bdev_nvme_set_hotplug", 00:15:52.367 "params": { 00:15:52.367 "enable": false, 00:15:52.367 "period_us": 100000 00:15:52.367 } 00:15:52.367 }, 00:15:52.367 { 00:15:52.367 "method": "bdev_wait_for_examine" 00:15:52.367 } 00:15:52.367 ] 00:15:52.367 }, 00:15:52.367 { 00:15:52.367 "subsystem": "nbd", 00:15:52.367 "config": [] 00:15:52.367 } 00:15:52.367 ] 00:15:52.367 }' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84667 00:15:52.367 14:32:31 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84667 ']' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84667 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84667 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:52.367 killing process with pid 84667 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84667' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84667 00:15:52.367 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.367 00:15:52.367 Latency(us) 00:15:52.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.367 =================================================================================================================== 00:15:52.367 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:52.367 [2024-07-15 14:32:31.741598] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84667 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84570 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84570 ']' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84570 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84570 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:52.367 killing process with pid 84570 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84570' 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84570 00:15:52.367 [2024-07-15 14:32:31.931897] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:52.367 14:32:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84570 00:15:52.627 14:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:52.627 14:32:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.627 14:32:32 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:52.627 "subsystems": [ 00:15:52.627 { 00:15:52.627 "subsystem": "keyring", 00:15:52.627 "config": [] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "iobuf", 00:15:52.627 "config": [ 00:15:52.627 { 00:15:52.627 "method": "iobuf_set_options", 00:15:52.627 "params": { 00:15:52.627 "large_bufsize": 135168, 00:15:52.627 "large_pool_count": 1024, 00:15:52.627 "small_bufsize": 8192, 00:15:52.627 
"small_pool_count": 8192 00:15:52.627 } 00:15:52.627 } 00:15:52.627 ] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "sock", 00:15:52.627 "config": [ 00:15:52.627 { 00:15:52.627 "method": "sock_set_default_impl", 00:15:52.627 "params": { 00:15:52.627 "impl_name": "posix" 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "sock_impl_set_options", 00:15:52.627 "params": { 00:15:52.627 "enable_ktls": false, 00:15:52.627 "enable_placement_id": 0, 00:15:52.627 "enable_quickack": false, 00:15:52.627 "enable_recv_pipe": true, 00:15:52.627 "enable_zerocopy_send_client": false, 00:15:52.627 "enable_zerocopy_send_server": true, 00:15:52.627 "impl_name": "ssl", 00:15:52.627 "recv_buf_size": 4096, 00:15:52.627 "send_buf_size": 4096, 00:15:52.627 "tls_version": 0, 00:15:52.627 "zerocopy_threshold": 0 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "sock_impl_set_options", 00:15:52.627 "params": { 00:15:52.627 "enable_ktls": false, 00:15:52.627 "enable_placement_id": 0, 00:15:52.627 "enable_quickack": false, 00:15:52.627 "enable_recv_pipe": true, 00:15:52.627 "enable_zerocopy_send_client": false, 00:15:52.627 "enable_zerocopy_send_server": true, 00:15:52.627 "impl_name": "posix", 00:15:52.627 "recv_buf_size": 2097152, 00:15:52.627 "send_buf_size": 2097152, 00:15:52.627 "tls_version": 0, 00:15:52.627 "zerocopy_threshold": 0 00:15:52.627 } 00:15:52.627 } 00:15:52.627 ] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "vmd", 00:15:52.627 "config": [] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "accel", 00:15:52.627 "config": [ 00:15:52.627 { 00:15:52.627 "method": "accel_set_options", 00:15:52.627 "params": { 00:15:52.627 "buf_count": 2048, 00:15:52.627 "large_cache_size": 16, 00:15:52.627 "sequence_count": 2048, 00:15:52.627 "small_cache_size": 128, 00:15:52.627 "task_count": 2048 00:15:52.627 } 00:15:52.627 } 00:15:52.627 ] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "bdev", 00:15:52.627 "config": [ 00:15:52.627 { 00:15:52.627 "method": "bdev_set_options", 00:15:52.627 "params": { 00:15:52.627 "bdev_auto_examine": true, 00:15:52.627 "bdev_io_cache_size": 256, 00:15:52.627 "bdev_io_pool_size": 65535, 00:15:52.627 "iobuf_large_cache_size": 16, 00:15:52.627 "iobuf_small_cache_size": 128 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "bdev_raid_set_options", 00:15:52.627 "params": { 00:15:52.627 "process_window_size_kb": 1024 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "bdev_iscsi_set_options", 00:15:52.627 "params": { 00:15:52.627 "timeout_sec": 30 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "bdev_nvme_set_options", 00:15:52.627 "params": { 00:15:52.627 "action_on_timeout": "none", 00:15:52.627 "allow_accel_sequence": false, 00:15:52.627 "arbitration_burst": 0, 00:15:52.627 "bdev_retry_count": 3, 00:15:52.627 "ctrlr_loss_timeout_sec": 0, 00:15:52.627 "delay_cmd_submit": true, 00:15:52.627 "dhchap_dhgroups": [ 00:15:52.627 "null", 00:15:52.627 "ffdhe2048", 00:15:52.627 "ffdhe3072", 00:15:52.627 "ffdhe4096", 00:15:52.627 "ffdhe6144", 00:15:52.627 "ffdhe8192" 00:15:52.627 ], 00:15:52.627 "dhchap_digests": [ 00:15:52.627 "sha256", 00:15:52.627 "sha384", 00:15:52.627 "sha512" 00:15:52.627 ], 00:15:52.627 "disable_auto_failback": false, 00:15:52.627 "fast_io_fail_timeout_sec": 0, 00:15:52.627 "generate_uuids": false, 00:15:52.627 "high_priority_weight": 0, 00:15:52.627 "io_path_stat": false, 00:15:52.627 "io_queue_requests": 0, 00:15:52.627 
"keep_alive_timeout_ms": 10000, 00:15:52.627 "low_priority_weight": 0, 00:15:52.627 "medium_priority_weight": 0, 00:15:52.627 "nvme_adminq_poll_period_us": 10000, 00:15:52.627 "nvme_error_stat": false, 00:15:52.627 "nvme_ioq_poll_period_us": 0, 00:15:52.627 "rdma_cm_event_timeout_ms": 0, 00:15:52.627 "rdma_max_cq_size": 0, 00:15:52.627 "rdma_srq_size": 0, 00:15:52.627 "reconnect_delay_sec": 0, 00:15:52.627 "timeout_admin_us": 0, 00:15:52.627 "timeout_us": 0, 00:15:52.627 "transport_ack_timeout": 0, 00:15:52.627 "transport_retry_count": 4, 00:15:52.627 "transport_tos": 0 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "bdev_nvme_set_hotplug", 00:15:52.627 "params": { 00:15:52.627 "enable": false, 00:15:52.627 "period_us": 100000 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "bdev_malloc_create", 00:15:52.627 "params": { 00:15:52.627 "block_size": 4096, 00:15:52.627 "name": "malloc0", 00:15:52.627 "num_blocks": 8192, 00:15:52.627 "optimal_io_boundary": 0, 00:15:52.627 "physical_block_size": 4096, 00:15:52.627 "uuid": "b047a7bc-47e4-4eca-8624-b283b2c0b7ee" 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "bdev_wait_for_examine" 00:15:52.627 } 00:15:52.627 ] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "nbd", 00:15:52.627 "config": [] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "scheduler", 00:15:52.627 "config": [ 00:15:52.627 { 00:15:52.627 "method": "framework_set_scheduler", 00:15:52.627 "params": { 00:15:52.627 "name": "static" 00:15:52.627 } 00:15:52.627 } 00:15:52.627 ] 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "subsystem": "nvmf", 00:15:52.627 "config": [ 00:15:52.627 { 00:15:52.627 "method": "nvmf_set_config", 00:15:52.627 "params": { 00:15:52.627 "admin_cmd_passthru": { 00:15:52.627 "identify_ctrlr": false 00:15:52.627 }, 00:15:52.627 "discovery_filter": "match_any" 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "nvmf_set_max_subsystems", 00:15:52.627 "params": { 00:15:52.627 "max_subsystems": 1024 00:15:52.627 } 00:15:52.627 }, 00:15:52.627 { 00:15:52.627 "method": "nvmf_set_crdt", 00:15:52.627 "params": { 00:15:52.628 "crdt1": 0, 00:15:52.628 "crdt2": 0, 00:15:52.628 "crdt3": 0 00:15:52.628 } 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "method": "nvmf_create_transport", 00:15:52.628 "params": { 00:15:52.628 "abort_timeout_sec": 1, 00:15:52.628 "ack_timeout": 0, 00:15:52.628 "buf_cache_size": 4294967295, 00:15:52.628 "c2h_success": false, 00:15:52.628 "data_wr_pool_size": 0, 00:15:52.628 "dif_insert_or_strip": false, 00:15:52.628 "in_capsule_data_size": 4096, 00:15:52.628 "io_unit_size": 131072, 00:15:52.628 "max_aq_depth": 128, 00:15:52.628 "max_io_qpairs_per_ctrlr": 127, 00:15:52.628 "max_io_size": 131072, 00:15:52.628 "max_queue_depth": 128, 00:15:52.628 "num_shared_buffers": 511, 00:15:52.628 "sock_priority": 0, 00:15:52.628 "trtype": "TCP", 00:15:52.628 "zcopy": false 00:15:52.628 } 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "method": "nvmf_create_subsystem", 00:15:52.628 "params": { 00:15:52.628 "allow_any_host": false, 00:15:52.628 "ana_reporting": false, 00:15:52.628 "max_cntlid": 65519, 00:15:52.628 "max_namespaces": 10, 00:15:52.628 "min_cntlid": 1, 00:15:52.628 "model_number": "SPDK bdev Controller", 00:15:52.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.628 "serial_number": "SPDK00000000000001" 00:15:52.628 } 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "method": "nvmf_subsystem_add_host", 00:15:52.628 "params": { 00:15:52.628 "host": 
"nqn.2016-06.io.spdk:host1", 00:15:52.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.628 "psk": "/tmp/tmp.ga96Pz0MiA" 00:15:52.628 } 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "method": "nvmf_subsystem_add_ns", 00:15:52.628 "params": { 00:15:52.628 "namespace": { 00:15:52.628 "bdev_name": "malloc0", 00:15:52.628 "nguid": "B047A7BC47E44ECA8624B283B2C0B7EE", 00:15:52.628 "no_auto_visible": false, 00:15:52.628 "nsid": 1, 00:15:52.628 "uuid": "b047a7bc-47e4-4eca-8624-b283b2c0b7ee" 00:15:52.628 }, 00:15:52.628 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:52.628 } 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "method": "nvmf_subsystem_add_listener", 00:15:52.628 "params": { 00:15:52.628 "listen_address": { 00:15:52.628 "adrfam": "IPv4", 00:15:52.628 "traddr": "10.0.0.2", 00:15:52.628 "trsvcid": "4420", 00:15:52.628 "trtype": "TCP" 00:15:52.628 }, 00:15:52.628 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.628 "secure_channel": true 00:15:52.628 } 00:15:52.628 } 00:15:52.628 ] 00:15:52.628 } 00:15:52.628 ] 00:15:52.628 }' 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84746 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84746 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84746 ']' 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.628 14:32:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.628 [2024-07-15 14:32:32.168862] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:52.628 [2024-07-15 14:32:32.168966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.887 [2024-07-15 14:32:32.308690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.887 [2024-07-15 14:32:32.366878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.887 [2024-07-15 14:32:32.366931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.887 [2024-07-15 14:32:32.366942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.887 [2024-07-15 14:32:32.366951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.887 [2024-07-15 14:32:32.366959] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:52.887 [2024-07-15 14:32:32.367045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.146 [2024-07-15 14:32:32.550475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.146 [2024-07-15 14:32:32.566405] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:53.146 [2024-07-15 14:32:32.582393] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:53.146 [2024-07-15 14:32:32.582616] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84790 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84790 /var/tmp/bdevperf.sock 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84790 ']' 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.714 14:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:53.715 14:32:33 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:53.715 "subsystems": [ 00:15:53.715 { 00:15:53.715 "subsystem": "keyring", 00:15:53.715 "config": [] 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "subsystem": "iobuf", 00:15:53.715 "config": [ 00:15:53.715 { 00:15:53.715 "method": "iobuf_set_options", 00:15:53.715 "params": { 00:15:53.715 "large_bufsize": 135168, 00:15:53.715 "large_pool_count": 1024, 00:15:53.715 "small_bufsize": 8192, 00:15:53.715 "small_pool_count": 8192 00:15:53.715 } 00:15:53.715 } 00:15:53.715 ] 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "subsystem": "sock", 00:15:53.715 "config": [ 00:15:53.715 { 00:15:53.715 "method": "sock_set_default_impl", 00:15:53.715 "params": { 00:15:53.715 "impl_name": "posix" 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "sock_impl_set_options", 00:15:53.715 "params": { 00:15:53.715 "enable_ktls": false, 00:15:53.715 "enable_placement_id": 0, 00:15:53.715 "enable_quickack": false, 00:15:53.715 "enable_recv_pipe": true, 00:15:53.715 "enable_zerocopy_send_client": false, 00:15:53.715 "enable_zerocopy_send_server": true, 00:15:53.715 "impl_name": "ssl", 00:15:53.715 "recv_buf_size": 4096, 00:15:53.715 "send_buf_size": 4096, 00:15:53.715 "tls_version": 0, 00:15:53.715 "zerocopy_threshold": 0 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "sock_impl_set_options", 00:15:53.715 "params": { 00:15:53.715 "enable_ktls": false, 00:15:53.715 "enable_placement_id": 0, 00:15:53.715 "enable_quickack": false, 00:15:53.715 "enable_recv_pipe": true, 00:15:53.715 "enable_zerocopy_send_client": false, 00:15:53.715 "enable_zerocopy_send_server": true, 00:15:53.715 "impl_name": "posix", 
00:15:53.715 "recv_buf_size": 2097152, 00:15:53.715 "send_buf_size": 2097152, 00:15:53.715 "tls_version": 0, 00:15:53.715 "zerocopy_threshold": 0 00:15:53.715 } 00:15:53.715 } 00:15:53.715 ] 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "subsystem": "vmd", 00:15:53.715 "config": [] 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "subsystem": "accel", 00:15:53.715 "config": [ 00:15:53.715 { 00:15:53.715 "method": "accel_set_options", 00:15:53.715 "params": { 00:15:53.715 "buf_count": 2048, 00:15:53.715 "large_cache_size": 16, 00:15:53.715 "sequence_count": 2048, 00:15:53.715 "small_cache_size": 128, 00:15:53.715 "task_count": 2048 00:15:53.715 } 00:15:53.715 } 00:15:53.715 ] 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "subsystem": "bdev", 00:15:53.715 "config": [ 00:15:53.715 { 00:15:53.715 "method": "bdev_set_options", 00:15:53.715 "params": { 00:15:53.715 "bdev_auto_examine": true, 00:15:53.715 "bdev_io_cache_size": 256, 00:15:53.715 "bdev_io_pool_size": 65535, 00:15:53.715 "iobuf_large_cache_size": 16, 00:15:53.715 "iobuf_small_cache_size": 128 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "bdev_raid_set_options", 00:15:53.715 "params": { 00:15:53.715 "process_window_size_kb": 1024 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "bdev_iscsi_set_options", 00:15:53.715 "params": { 00:15:53.715 "timeout_sec": 30 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "bdev_nvme_set_options", 00:15:53.715 "params": { 00:15:53.715 "action_on_timeout": "none", 00:15:53.715 "allow_accel_sequence": false, 00:15:53.715 "arbitration_burst": 0, 00:15:53.715 "bdev_retry_count": 3, 00:15:53.715 "ctrlr_loss_timeout_sec": 0, 00:15:53.715 "delay_cmd_submit": true, 00:15:53.715 "dhchap_dhgroups": [ 00:15:53.715 "null", 00:15:53.715 "ffdhe2048", 00:15:53.715 "ffdhe3072", 00:15:53.715 "ffdhe4096", 00:15:53.715 "ffdhe6144", 00:15:53.715 "ffdhe8192" 00:15:53.715 ], 00:15:53.715 "dhchap_digests": [ 00:15:53.715 "sha256", 00:15:53.715 "sha384", 00:15:53.715 "sha512" 00:15:53.715 ], 00:15:53.715 "disable_auto_failback": false, 00:15:53.715 "fast_io_fail_timeout_sec": 0, 00:15:53.715 "generate_uuids": false, 00:15:53.715 "high_priority_weight": 0, 00:15:53.715 "io_path_stat": false, 00:15:53.715 "io_queue_requests": 512, 00:15:53.715 "keep_alive_timeout_ms": 10000, 00:15:53.715 "low_priority_weight": 0, 00:15:53.715 "medium_priority_weight": 0, 00:15:53.715 "nvme_adminq_poll_period_us": 10000, 00:15:53.715 "nvme_error_stat": false, 00:15:53.715 "nvme_ioq_poll_period_us": 0, 00:15:53.715 "rdma_cm_event_timeout_ms": 0, 00:15:53.715 "rdma_max_cq_size": 0, 00:15:53.715 "rdma_srq_size": 0, 00:15:53.715 "reconnect_delay_sec": 0, 00:15:53.715 "timeout_admin_us": 0, 00:15:53.715 "timeout_us": 0, 00:15:53.715 "transport_ack_timeout": 0, 00:15:53.715 "transport_retry_count": 4, 00:15:53.715 "transport_tos": 0 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "bdev_nvme_attach_controller", 00:15:53.715 "params": { 00:15:53.715 "adrfam": "IPv4", 00:15:53.715 "ctrlr_loss_timeout_sec": 0, 00:15:53.715 "ddgst": false, 00:15:53.715 "fast_io_fail_timeout_sec": 0, 00:15:53.715 "hdgst": false, 00:15:53.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.715 "name": "TLSTEST", 00:15:53.715 "prchk_guard": false, 00:15:53.715 "prchk_reftag": false, 00:15:53.715 "psk": "/tmp/tmp.ga96Pz0MiA", 00:15:53.715 "reconnect_delay_sec": 0, 00:15:53.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.715 "traddr": "10.0.0.2", 00:15:53.715 "trsvcid": "4420", 00:15:53.715 
"trtype": "TCP" 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "bdev_nvme_set_hotplug", 00:15:53.715 "params": { 00:15:53.715 "enable": false, 00:15:53.715 "period_us": 100000 00:15:53.715 } 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "method": "bdev_wait_for_examine" 00:15:53.715 } 00:15:53.715 ] 00:15:53.715 }, 00:15:53.715 { 00:15:53.715 "subsystem": "nbd", 00:15:53.715 "config": [] 00:15:53.715 } 00:15:53.715 ] 00:15:53.715 }' 00:15:53.715 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:53.715 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:53.715 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.715 14:32:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.715 [2024-07-15 14:32:33.278508] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:15:53.715 [2024-07-15 14:32:33.278600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84790 ] 00:15:53.974 [2024-07-15 14:32:33.413182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.974 [2024-07-15 14:32:33.472472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.232 [2024-07-15 14:32:33.598051] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:54.232 [2024-07-15 14:32:33.598166] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:54.799 14:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.799 14:32:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:54.799 14:32:34 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:54.799 Running I/O for 10 seconds... 
00:16:04.776 00:16:04.776 Latency(us) 00:16:04.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.776 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:04.776 Verification LBA range: start 0x0 length 0x2000 00:16:04.776 TLSTESTn1 : 10.02 3946.64 15.42 0.00 0.00 32368.59 7208.96 32887.16 00:16:04.776 =================================================================================================================== 00:16:04.776 Total : 3946.64 15.42 0.00 0.00 32368.59 7208.96 32887.16 00:16:04.776 0 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84790 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84790 ']' 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84790 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:04.776 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84790 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:05.034 killing process with pid 84790 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84790' 00:16:05.034 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.034 00:16:05.034 Latency(us) 00:16:05.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.034 =================================================================================================================== 00:16:05.034 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84790 00:16:05.034 [2024-07-15 14:32:44.372353] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84790 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84746 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84746 ']' 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84746 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84746 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:05.034 killing process with pid 84746 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84746' 00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84746 00:16:05.034 [2024-07-15 14:32:44.566161] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
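For reference, the initiator half of the run that just completed follows the same replay pattern: bdevperf is launched idle (-z) with its own saved JSON configuration on /dev/fd/63, and the helper script then drives the 10-second verify workload over the bdevperf RPC socket. A sketch, with bdevperf_config.json standing in for the config the test feeds through the descriptor:

    # Start bdevperf idle with the saved config, then trigger the workload
    # (the real test waits for the RPC socket to appear before driving it).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c /dev/fd/63 63< bdevperf_config.json &
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests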
00:16:05.034 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84746 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84937 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84937 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84937 ']' 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.292 14:32:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.292 [2024-07-15 14:32:44.796069] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:05.292 [2024-07-15 14:32:44.796172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.550 [2024-07-15 14:32:44.937543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.550 [2024-07-15 14:32:45.007577] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.550 [2024-07-15 14:32:45.007636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.550 [2024-07-15 14:32:45.007649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.550 [2024-07-15 14:32:45.007659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.550 [2024-07-15 14:32:45.007668] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
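As the app_setup_trace notices above point out, the tracepoint group mask 0xFFFF is enabled for this target instance, so a snapshot of its events can be taken while it runs, or the raw trace buffer copied off for later inspection:

    # Live snapshot of the nvmf target's trace events (instance id matches -i 0),
    # exactly as suggested by the notice in the log.
    spdk_trace -s nvmf -i 0
    # Alternatively, preserve the shared-memory trace buffer for offline analysis
    # (destination path is illustrative).
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0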
00:16:05.550 [2024-07-15 14:32:45.007712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ga96Pz0MiA 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ga96Pz0MiA 00:16:06.484 14:32:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:06.742 [2024-07-15 14:32:46.124833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.742 14:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:07.000 14:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:07.258 [2024-07-15 14:32:46.628940] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:07.258 [2024-07-15 14:32:46.629151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.258 14:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:07.517 malloc0 00:16:07.517 14:32:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:07.775 14:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ga96Pz0MiA 00:16:08.033 [2024-07-15 14:32:47.404028] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=85041 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 85041 /var/tmp/bdevperf.sock 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85041 ']' 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
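Collected in one place, the setup_nvmf_tgt sequence traced above is the whole target-side TLS configuration: a TCP transport, a subsystem, a listener created with -k (TLS), a malloc backing namespace, and a host entry bound to the temporary PSK file. The RPC names and arguments are exactly those shown in the trace; only the $RPC shorthand is added for readability:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ga96Pz0MiA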
00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.033 14:32:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:08.034 [2024-07-15 14:32:47.471536] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:08.034 [2024-07-15 14:32:47.471626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85041 ] 00:16:08.034 [2024-07-15 14:32:47.602677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.292 [2024-07-15 14:32:47.683450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.227 14:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.227 14:32:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:09.227 14:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ga96Pz0MiA 00:16:09.227 14:32:48 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:09.486 [2024-07-15 14:32:48.998090] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.486 nvme0n1 00:16:09.744 14:32:49 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.744 Running I/O for 1 seconds... 
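The initiator side of this pass differs from the earlier one in how the PSK reaches the bdev layer: instead of passing the key file path directly to the attach call (which tripped the spdk_nvme_ctrlr_opts.psk deprecation warning above), the file is registered in bdevperf's keyring as key0 and the controller is attached by key name. The traced RPCs, gathered together with the same $RPC shorthand:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ga96Pz0MiA
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests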
00:16:10.682 00:16:10.682 Latency(us) 00:16:10.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.682 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:10.682 Verification LBA range: start 0x0 length 0x2000 00:16:10.682 nvme0n1 : 1.03 3820.22 14.92 0.00 0.00 33020.82 9889.98 23473.80 00:16:10.682 =================================================================================================================== 00:16:10.682 Total : 3820.22 14.92 0.00 0.00 33020.82 9889.98 23473.80 00:16:10.682 0 00:16:10.682 14:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 85041 00:16:10.682 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85041 ']' 00:16:10.682 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85041 00:16:10.682 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:10.682 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.682 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85041 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:10.940 killing process with pid 85041 00:16:10.940 Received shutdown signal, test time was about 1.000000 seconds 00:16:10.940 00:16:10.940 Latency(us) 00:16:10.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.940 =================================================================================================================== 00:16:10.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85041' 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85041 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85041 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84937 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84937 ']' 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84937 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84937 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:10.940 killing process with pid 84937 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84937' 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84937 00:16:10.940 [2024-07-15 14:32:50.471393] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:10.940 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84937 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85116 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85116 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85116 ']' 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.198 14:32:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:11.198 [2024-07-15 14:32:50.691972] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:11.198 [2024-07-15 14:32:50.692054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.456 [2024-07-15 14:32:50.826666] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.456 [2024-07-15 14:32:50.883158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.456 [2024-07-15 14:32:50.883210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.456 [2024-07-15 14:32:50.883222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.456 [2024-07-15 14:32:50.883230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.456 [2024-07-15 14:32:50.883237] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:11.457 [2024-07-15 14:32:50.883262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.391 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.391 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:12.391 14:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.391 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.392 [2024-07-15 14:32:51.689517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.392 malloc0 00:16:12.392 [2024-07-15 14:32:51.716059] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:12.392 [2024-07-15 14:32:51.716263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=85165 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 85165 /var/tmp/bdevperf.sock 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85165 ']' 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.392 14:32:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.392 [2024-07-15 14:32:51.792707] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:16:12.392 [2024-07-15 14:32:51.793181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85165 ] 00:16:12.392 [2024-07-15 14:32:51.924832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.651 [2024-07-15 14:32:51.986998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.218 14:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.218 14:32:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:13.218 14:32:52 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ga96Pz0MiA 00:16:13.476 14:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:13.734 [2024-07-15 14:32:53.244115] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:13.734 nvme0n1 00:16:13.992 14:32:53 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:13.992 Running I/O for 1 seconds... 00:16:14.926 00:16:14.926 Latency(us) 00:16:14.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.926 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.926 Verification LBA range: start 0x0 length 0x2000 00:16:14.926 nvme0n1 : 1.02 3819.61 14.92 0.00 0.00 33115.64 5928.03 34078.72 00:16:14.926 =================================================================================================================== 00:16:14.926 Total : 3819.61 14.92 0.00 0.00 33115.64 5928.03 34078.72 00:16:14.926 0 00:16:14.926 14:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:16:14.926 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.926 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.184 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.184 14:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:16:15.184 "subsystems": [ 00:16:15.184 { 00:16:15.184 "subsystem": "keyring", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "keyring_file_add_key", 00:16:15.184 "params": { 00:16:15.184 "name": "key0", 00:16:15.184 "path": "/tmp/tmp.ga96Pz0MiA" 00:16:15.184 } 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "iobuf", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "iobuf_set_options", 00:16:15.184 "params": { 00:16:15.184 "large_bufsize": 135168, 00:16:15.184 "large_pool_count": 1024, 00:16:15.184 "small_bufsize": 8192, 00:16:15.184 "small_pool_count": 8192 00:16:15.184 } 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "sock", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "sock_set_default_impl", 00:16:15.184 "params": { 00:16:15.184 "impl_name": "posix" 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "sock_impl_set_options", 00:16:15.184 "params": { 00:16:15.184 
"enable_ktls": false, 00:16:15.184 "enable_placement_id": 0, 00:16:15.184 "enable_quickack": false, 00:16:15.184 "enable_recv_pipe": true, 00:16:15.184 "enable_zerocopy_send_client": false, 00:16:15.184 "enable_zerocopy_send_server": true, 00:16:15.184 "impl_name": "ssl", 00:16:15.184 "recv_buf_size": 4096, 00:16:15.184 "send_buf_size": 4096, 00:16:15.184 "tls_version": 0, 00:16:15.184 "zerocopy_threshold": 0 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "sock_impl_set_options", 00:16:15.184 "params": { 00:16:15.184 "enable_ktls": false, 00:16:15.184 "enable_placement_id": 0, 00:16:15.184 "enable_quickack": false, 00:16:15.184 "enable_recv_pipe": true, 00:16:15.184 "enable_zerocopy_send_client": false, 00:16:15.184 "enable_zerocopy_send_server": true, 00:16:15.184 "impl_name": "posix", 00:16:15.184 "recv_buf_size": 2097152, 00:16:15.184 "send_buf_size": 2097152, 00:16:15.184 "tls_version": 0, 00:16:15.184 "zerocopy_threshold": 0 00:16:15.184 } 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "vmd", 00:16:15.184 "config": [] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "accel", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "accel_set_options", 00:16:15.184 "params": { 00:16:15.184 "buf_count": 2048, 00:16:15.184 "large_cache_size": 16, 00:16:15.184 "sequence_count": 2048, 00:16:15.184 "small_cache_size": 128, 00:16:15.184 "task_count": 2048 00:16:15.184 } 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "bdev", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "bdev_set_options", 00:16:15.184 "params": { 00:16:15.184 "bdev_auto_examine": true, 00:16:15.184 "bdev_io_cache_size": 256, 00:16:15.184 "bdev_io_pool_size": 65535, 00:16:15.184 "iobuf_large_cache_size": 16, 00:16:15.184 "iobuf_small_cache_size": 128 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_raid_set_options", 00:16:15.184 "params": { 00:16:15.184 "process_window_size_kb": 1024 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_iscsi_set_options", 00:16:15.184 "params": { 00:16:15.184 "timeout_sec": 30 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_nvme_set_options", 00:16:15.184 "params": { 00:16:15.184 "action_on_timeout": "none", 00:16:15.184 "allow_accel_sequence": false, 00:16:15.184 "arbitration_burst": 0, 00:16:15.184 "bdev_retry_count": 3, 00:16:15.184 "ctrlr_loss_timeout_sec": 0, 00:16:15.184 "delay_cmd_submit": true, 00:16:15.184 "dhchap_dhgroups": [ 00:16:15.184 "null", 00:16:15.184 "ffdhe2048", 00:16:15.184 "ffdhe3072", 00:16:15.184 "ffdhe4096", 00:16:15.184 "ffdhe6144", 00:16:15.184 "ffdhe8192" 00:16:15.184 ], 00:16:15.184 "dhchap_digests": [ 00:16:15.184 "sha256", 00:16:15.184 "sha384", 00:16:15.184 "sha512" 00:16:15.184 ], 00:16:15.184 "disable_auto_failback": false, 00:16:15.184 "fast_io_fail_timeout_sec": 0, 00:16:15.184 "generate_uuids": false, 00:16:15.184 "high_priority_weight": 0, 00:16:15.184 "io_path_stat": false, 00:16:15.184 "io_queue_requests": 0, 00:16:15.184 "keep_alive_timeout_ms": 10000, 00:16:15.184 "low_priority_weight": 0, 00:16:15.184 "medium_priority_weight": 0, 00:16:15.184 "nvme_adminq_poll_period_us": 10000, 00:16:15.184 "nvme_error_stat": false, 00:16:15.184 "nvme_ioq_poll_period_us": 0, 00:16:15.184 "rdma_cm_event_timeout_ms": 0, 00:16:15.184 "rdma_max_cq_size": 0, 00:16:15.184 "rdma_srq_size": 0, 00:16:15.184 "reconnect_delay_sec": 0, 00:16:15.184 "timeout_admin_us": 0, 
00:16:15.184 "timeout_us": 0, 00:16:15.184 "transport_ack_timeout": 0, 00:16:15.184 "transport_retry_count": 4, 00:16:15.184 "transport_tos": 0 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_nvme_set_hotplug", 00:16:15.184 "params": { 00:16:15.184 "enable": false, 00:16:15.184 "period_us": 100000 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_malloc_create", 00:16:15.184 "params": { 00:16:15.184 "block_size": 4096, 00:16:15.184 "name": "malloc0", 00:16:15.184 "num_blocks": 8192, 00:16:15.184 "optimal_io_boundary": 0, 00:16:15.184 "physical_block_size": 4096, 00:16:15.184 "uuid": "f08e3b68-350d-4fce-8a7a-916b8ab6b1bc" 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "bdev_wait_for_examine" 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "nbd", 00:16:15.184 "config": [] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "scheduler", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "framework_set_scheduler", 00:16:15.184 "params": { 00:16:15.184 "name": "static" 00:16:15.184 } 00:16:15.184 } 00:16:15.184 ] 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "subsystem": "nvmf", 00:16:15.184 "config": [ 00:16:15.184 { 00:16:15.184 "method": "nvmf_set_config", 00:16:15.184 "params": { 00:16:15.184 "admin_cmd_passthru": { 00:16:15.184 "identify_ctrlr": false 00:16:15.184 }, 00:16:15.184 "discovery_filter": "match_any" 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "nvmf_set_max_subsystems", 00:16:15.184 "params": { 00:16:15.184 "max_subsystems": 1024 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "nvmf_set_crdt", 00:16:15.184 "params": { 00:16:15.184 "crdt1": 0, 00:16:15.184 "crdt2": 0, 00:16:15.184 "crdt3": 0 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "nvmf_create_transport", 00:16:15.184 "params": { 00:16:15.184 "abort_timeout_sec": 1, 00:16:15.184 "ack_timeout": 0, 00:16:15.184 "buf_cache_size": 4294967295, 00:16:15.184 "c2h_success": false, 00:16:15.184 "data_wr_pool_size": 0, 00:16:15.184 "dif_insert_or_strip": false, 00:16:15.184 "in_capsule_data_size": 4096, 00:16:15.184 "io_unit_size": 131072, 00:16:15.184 "max_aq_depth": 128, 00:16:15.184 "max_io_qpairs_per_ctrlr": 127, 00:16:15.184 "max_io_size": 131072, 00:16:15.184 "max_queue_depth": 128, 00:16:15.184 "num_shared_buffers": 511, 00:16:15.184 "sock_priority": 0, 00:16:15.184 "trtype": "TCP", 00:16:15.184 "zcopy": false 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "nvmf_create_subsystem", 00:16:15.184 "params": { 00:16:15.184 "allow_any_host": false, 00:16:15.184 "ana_reporting": false, 00:16:15.184 "max_cntlid": 65519, 00:16:15.184 "max_namespaces": 32, 00:16:15.184 "min_cntlid": 1, 00:16:15.184 "model_number": "SPDK bdev Controller", 00:16:15.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.184 "serial_number": "00000000000000000000" 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "nvmf_subsystem_add_host", 00:16:15.184 "params": { 00:16:15.184 "host": "nqn.2016-06.io.spdk:host1", 00:16:15.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.184 "psk": "key0" 00:16:15.184 } 00:16:15.184 }, 00:16:15.184 { 00:16:15.184 "method": "nvmf_subsystem_add_ns", 00:16:15.184 "params": { 00:16:15.184 "namespace": { 00:16:15.184 "bdev_name": "malloc0", 00:16:15.184 "nguid": "F08E3B68350D4FCE8A7A916B8AB6B1BC", 00:16:15.184 "no_auto_visible": false, 00:16:15.185 "nsid": 1, 00:16:15.185 "uuid": 
"f08e3b68-350d-4fce-8a7a-916b8ab6b1bc" 00:16:15.185 }, 00:16:15.185 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:15.185 } 00:16:15.185 }, 00:16:15.185 { 00:16:15.185 "method": "nvmf_subsystem_add_listener", 00:16:15.185 "params": { 00:16:15.185 "listen_address": { 00:16:15.185 "adrfam": "IPv4", 00:16:15.185 "traddr": "10.0.0.2", 00:16:15.185 "trsvcid": "4420", 00:16:15.185 "trtype": "TCP" 00:16:15.185 }, 00:16:15.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.185 "secure_channel": true 00:16:15.185 } 00:16:15.185 } 00:16:15.185 ] 00:16:15.185 } 00:16:15.185 ] 00:16:15.185 }' 00:16:15.185 14:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:15.443 14:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:16:15.443 "subsystems": [ 00:16:15.443 { 00:16:15.443 "subsystem": "keyring", 00:16:15.443 "config": [ 00:16:15.443 { 00:16:15.443 "method": "keyring_file_add_key", 00:16:15.443 "params": { 00:16:15.443 "name": "key0", 00:16:15.443 "path": "/tmp/tmp.ga96Pz0MiA" 00:16:15.443 } 00:16:15.443 } 00:16:15.443 ] 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "subsystem": "iobuf", 00:16:15.443 "config": [ 00:16:15.443 { 00:16:15.443 "method": "iobuf_set_options", 00:16:15.443 "params": { 00:16:15.443 "large_bufsize": 135168, 00:16:15.443 "large_pool_count": 1024, 00:16:15.443 "small_bufsize": 8192, 00:16:15.443 "small_pool_count": 8192 00:16:15.443 } 00:16:15.443 } 00:16:15.443 ] 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "subsystem": "sock", 00:16:15.443 "config": [ 00:16:15.443 { 00:16:15.443 "method": "sock_set_default_impl", 00:16:15.443 "params": { 00:16:15.443 "impl_name": "posix" 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "sock_impl_set_options", 00:16:15.443 "params": { 00:16:15.443 "enable_ktls": false, 00:16:15.443 "enable_placement_id": 0, 00:16:15.443 "enable_quickack": false, 00:16:15.443 "enable_recv_pipe": true, 00:16:15.443 "enable_zerocopy_send_client": false, 00:16:15.443 "enable_zerocopy_send_server": true, 00:16:15.443 "impl_name": "ssl", 00:16:15.443 "recv_buf_size": 4096, 00:16:15.443 "send_buf_size": 4096, 00:16:15.443 "tls_version": 0, 00:16:15.443 "zerocopy_threshold": 0 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "sock_impl_set_options", 00:16:15.443 "params": { 00:16:15.443 "enable_ktls": false, 00:16:15.443 "enable_placement_id": 0, 00:16:15.443 "enable_quickack": false, 00:16:15.443 "enable_recv_pipe": true, 00:16:15.443 "enable_zerocopy_send_client": false, 00:16:15.443 "enable_zerocopy_send_server": true, 00:16:15.443 "impl_name": "posix", 00:16:15.443 "recv_buf_size": 2097152, 00:16:15.443 "send_buf_size": 2097152, 00:16:15.443 "tls_version": 0, 00:16:15.443 "zerocopy_threshold": 0 00:16:15.443 } 00:16:15.443 } 00:16:15.443 ] 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "subsystem": "vmd", 00:16:15.443 "config": [] 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "subsystem": "accel", 00:16:15.443 "config": [ 00:16:15.443 { 00:16:15.443 "method": "accel_set_options", 00:16:15.443 "params": { 00:16:15.443 "buf_count": 2048, 00:16:15.443 "large_cache_size": 16, 00:16:15.443 "sequence_count": 2048, 00:16:15.443 "small_cache_size": 128, 00:16:15.443 "task_count": 2048 00:16:15.443 } 00:16:15.443 } 00:16:15.443 ] 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "subsystem": "bdev", 00:16:15.443 "config": [ 00:16:15.443 { 00:16:15.443 "method": "bdev_set_options", 00:16:15.443 "params": { 00:16:15.443 "bdev_auto_examine": true, 
00:16:15.443 "bdev_io_cache_size": 256, 00:16:15.443 "bdev_io_pool_size": 65535, 00:16:15.443 "iobuf_large_cache_size": 16, 00:16:15.443 "iobuf_small_cache_size": 128 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_raid_set_options", 00:16:15.443 "params": { 00:16:15.443 "process_window_size_kb": 1024 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_iscsi_set_options", 00:16:15.443 "params": { 00:16:15.443 "timeout_sec": 30 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_nvme_set_options", 00:16:15.443 "params": { 00:16:15.443 "action_on_timeout": "none", 00:16:15.443 "allow_accel_sequence": false, 00:16:15.443 "arbitration_burst": 0, 00:16:15.443 "bdev_retry_count": 3, 00:16:15.443 "ctrlr_loss_timeout_sec": 0, 00:16:15.443 "delay_cmd_submit": true, 00:16:15.443 "dhchap_dhgroups": [ 00:16:15.443 "null", 00:16:15.443 "ffdhe2048", 00:16:15.443 "ffdhe3072", 00:16:15.443 "ffdhe4096", 00:16:15.443 "ffdhe6144", 00:16:15.443 "ffdhe8192" 00:16:15.443 ], 00:16:15.443 "dhchap_digests": [ 00:16:15.443 "sha256", 00:16:15.443 "sha384", 00:16:15.443 "sha512" 00:16:15.443 ], 00:16:15.443 "disable_auto_failback": false, 00:16:15.443 "fast_io_fail_timeout_sec": 0, 00:16:15.443 "generate_uuids": false, 00:16:15.443 "high_priority_weight": 0, 00:16:15.443 "io_path_stat": false, 00:16:15.443 "io_queue_requests": 512, 00:16:15.443 "keep_alive_timeout_ms": 10000, 00:16:15.443 "low_priority_weight": 0, 00:16:15.443 "medium_priority_weight": 0, 00:16:15.443 "nvme_adminq_poll_period_us": 10000, 00:16:15.443 "nvme_error_stat": false, 00:16:15.443 "nvme_ioq_poll_period_us": 0, 00:16:15.443 "rdma_cm_event_timeout_ms": 0, 00:16:15.443 "rdma_max_cq_size": 0, 00:16:15.443 "rdma_srq_size": 0, 00:16:15.443 "reconnect_delay_sec": 0, 00:16:15.443 "timeout_admin_us": 0, 00:16:15.443 "timeout_us": 0, 00:16:15.443 "transport_ack_timeout": 0, 00:16:15.443 "transport_retry_count": 4, 00:16:15.443 "transport_tos": 0 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_nvme_attach_controller", 00:16:15.443 "params": { 00:16:15.443 "adrfam": "IPv4", 00:16:15.443 "ctrlr_loss_timeout_sec": 0, 00:16:15.443 "ddgst": false, 00:16:15.443 "fast_io_fail_timeout_sec": 0, 00:16:15.443 "hdgst": false, 00:16:15.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:15.443 "name": "nvme0", 00:16:15.443 "prchk_guard": false, 00:16:15.443 "prchk_reftag": false, 00:16:15.443 "psk": "key0", 00:16:15.443 "reconnect_delay_sec": 0, 00:16:15.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.443 "traddr": "10.0.0.2", 00:16:15.443 "trsvcid": "4420", 00:16:15.443 "trtype": "TCP" 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_nvme_set_hotplug", 00:16:15.443 "params": { 00:16:15.443 "enable": false, 00:16:15.443 "period_us": 100000 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_enable_histogram", 00:16:15.443 "params": { 00:16:15.443 "enable": true, 00:16:15.443 "name": "nvme0n1" 00:16:15.443 } 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "method": "bdev_wait_for_examine" 00:16:15.443 } 00:16:15.443 ] 00:16:15.443 }, 00:16:15.443 { 00:16:15.443 "subsystem": "nbd", 00:16:15.443 "config": [] 00:16:15.443 } 00:16:15.443 ] 00:16:15.443 }' 00:16:15.443 14:32:54 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 85165 00:16:15.443 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85165 ']' 00:16:15.443 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85165 
00:16:15.443 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:15.443 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.444 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85165 00:16:15.444 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:15.444 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:15.444 killing process with pid 85165 00:16:15.444 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85165' 00:16:15.444 Received shutdown signal, test time was about 1.000000 seconds 00:16:15.444 00:16:15.444 Latency(us) 00:16:15.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:15.444 =================================================================================================================== 00:16:15.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:15.444 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85165 00:16:15.444 14:32:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85165 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 85116 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85116 ']' 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85116 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85116 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:15.701 killing process with pid 85116 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85116' 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85116 00:16:15.701 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85116 00:16:15.960 14:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:16:15.960 14:32:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:15.960 "subsystems": [ 00:16:15.960 { 00:16:15.960 "subsystem": "keyring", 00:16:15.960 "config": [ 00:16:15.960 { 00:16:15.960 "method": "keyring_file_add_key", 00:16:15.960 "params": { 00:16:15.960 "name": "key0", 00:16:15.960 "path": "/tmp/tmp.ga96Pz0MiA" 00:16:15.960 } 00:16:15.960 } 00:16:15.960 ] 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "subsystem": "iobuf", 00:16:15.960 "config": [ 00:16:15.960 { 00:16:15.960 "method": "iobuf_set_options", 00:16:15.960 "params": { 00:16:15.960 "large_bufsize": 135168, 00:16:15.960 "large_pool_count": 1024, 00:16:15.960 "small_bufsize": 8192, 00:16:15.960 "small_pool_count": 8192 00:16:15.960 } 00:16:15.960 } 00:16:15.960 ] 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "subsystem": "sock", 00:16:15.960 "config": [ 00:16:15.960 { 00:16:15.960 "method": "sock_set_default_impl", 00:16:15.960 "params": { 00:16:15.960 "impl_name": "posix" 00:16:15.960 } 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "method": "sock_impl_set_options", 00:16:15.960 "params": { 00:16:15.960 "enable_ktls": false, 00:16:15.960 
"enable_placement_id": 0, 00:16:15.960 "enable_quickack": false, 00:16:15.960 "enable_recv_pipe": true, 00:16:15.960 "enable_zerocopy_send_client": false, 00:16:15.960 "enable_zerocopy_send_server": true, 00:16:15.960 "impl_name": "ssl", 00:16:15.960 "recv_buf_size": 4096, 00:16:15.960 "send_buf_size": 4096, 00:16:15.960 "tls_version": 0, 00:16:15.960 "zerocopy_threshold": 0 00:16:15.960 } 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "method": "sock_impl_set_options", 00:16:15.960 "params": { 00:16:15.960 "enable_ktls": false, 00:16:15.960 "enable_placement_id": 0, 00:16:15.960 "enable_quickack": false, 00:16:15.960 "enable_recv_pipe": true, 00:16:15.960 "enable_zerocopy_send_client": false, 00:16:15.960 "enable_zerocopy_send_server": true, 00:16:15.960 "impl_name": "posix", 00:16:15.960 "recv_buf_size": 2097152, 00:16:15.960 "send_buf_size": 2097152, 00:16:15.960 "tls_version": 0, 00:16:15.960 "zerocopy_threshold": 0 00:16:15.960 } 00:16:15.960 } 00:16:15.960 ] 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "subsystem": "vmd", 00:16:15.960 "config": [] 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "subsystem": "accel", 00:16:15.960 "config": [ 00:16:15.960 { 00:16:15.960 "method": "accel_set_options", 00:16:15.960 "params": { 00:16:15.960 "buf_count": 2048, 00:16:15.960 "large_cache_size": 16, 00:16:15.960 "sequence_count": 2048, 00:16:15.960 "small_cache_size": 128, 00:16:15.960 "task_count": 2048 00:16:15.960 } 00:16:15.960 } 00:16:15.960 ] 00:16:15.960 }, 00:16:15.960 { 00:16:15.960 "subsystem": "bdev", 00:16:15.960 "config": [ 00:16:15.960 { 00:16:15.960 "method": "bdev_set_options", 00:16:15.960 "params": { 00:16:15.960 "bdev_auto_examine": true, 00:16:15.961 "bdev_io_cache_size": 256, 00:16:15.961 "bdev_io_pool_size": 65535, 00:16:15.961 "iobuf_large_cache_size": 16, 00:16:15.961 "iobuf_small_cache_size": 128 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "bdev_raid_set_options", 00:16:15.961 "params": { 00:16:15.961 "process_window_size_kb": 1024 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "bdev_iscsi_set_options", 00:16:15.961 "params": { 00:16:15.961 "timeout_sec": 30 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "bdev_nvme_set_options", 00:16:15.961 "params": { 00:16:15.961 "action_on_timeout": "none", 00:16:15.961 "allow_accel_sequence": false, 00:16:15.961 "arbitration_burst": 0, 00:16:15.961 "bdev_retry_count": 3, 00:16:15.961 "ctrlr_loss_timeout_sec": 0, 00:16:15.961 "delay_cmd_submit": true, 00:16:15.961 "dhchap_dhgroups": [ 00:16:15.961 "null", 00:16:15.961 "ffdhe2048", 00:16:15.961 "ffdhe3072", 00:16:15.961 "ffdhe4096", 00:16:15.961 "ffdhe6144", 00:16:15.961 "ffdhe8192" 00:16:15.961 ], 00:16:15.961 "dhchap_digests": [ 00:16:15.961 "sha256", 00:16:15.961 "sha384", 00:16:15.961 "sha512" 00:16:15.961 ], 00:16:15.961 "disable_auto_failback": false, 00:16:15.961 "fast_io_fail_timeout_sec": 0, 00:16:15.961 "generate_uuids": false, 00:16:15.961 "high_priority_weight": 0, 00:16:15.961 "io_path_stat": false, 00:16:15.961 "io_queue_requests": 0, 00:16:15.961 "keep_alive_timeout_ms": 10000, 00:16:15.961 "low_priority_weight": 0, 00:16:15.961 "medium_priority_weight": 0, 00:16:15.961 "nvme_adminq_poll_period_us": 10000, 00:16:15.961 "nvme_error_stat": false, 00:16:15.961 "nvme_ioq_poll_period_us": 0, 00:16:15.961 "rdma_cm_event_timeout_ms": 0, 00:16:15.961 "rdma_max_cq_size": 0, 00:16:15.961 "rdma_srq_size": 0, 00:16:15.961 "reconnect_delay_sec": 0, 00:16:15.961 "timeout_admin_us": 0, 00:16:15.961 "timeout_us": 0, 
00:16:15.961 "transport_ack_timeout": 0, 00:16:15.961 "transport_retry_count": 4, 00:16:15.961 "transport_tos": 0 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "bdev_nvme_set_hotplug", 00:16:15.961 "params": { 00:16:15.961 "enable": false, 00:16:15.961 "period_us": 100000 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "bdev_malloc_create", 00:16:15.961 "params": { 00:16:15.961 "block_size": 4096, 00:16:15.961 "name": "malloc0", 00:16:15.961 "num_blocks": 8192, 00:16:15.961 "optimal_io_boundary": 0, 00:16:15.961 "physical_block_size": 4096, 00:16:15.961 "uuid": "f08e3b68-350d-4fce-8a7a-916b8ab6b1bc" 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "bdev_wait_for_examine" 00:16:15.961 } 00:16:15.961 ] 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "subsystem": "nbd", 00:16:15.961 "config": [] 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "subsystem": "scheduler", 00:16:15.961 "config": [ 00:16:15.961 { 00:16:15.961 "method": "framework_set_scheduler", 00:16:15.961 "params": { 00:16:15.961 "name": "static" 00:16:15.961 } 00:16:15.961 } 00:16:15.961 ] 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "subsystem": "nvmf", 00:16:15.961 "config": [ 00:16:15.961 { 00:16:15.961 "method": "nvmf_set_config", 00:16:15.961 "params": { 00:16:15.961 "admin_cmd_passthru": { 00:16:15.961 "identify_ctrlr": false 00:16:15.961 }, 00:16:15.961 "discovery_filter": "match_any" 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_set_max_subsystems", 00:16:15.961 "params": { 00:16:15.961 "max_subsystems": 1024 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_set_crdt", 00:16:15.961 "params": { 00:16:15.961 "crdt1": 0, 00:16:15.961 "crdt2": 0, 00:16:15.961 "crdt3": 0 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_create_transport", 00:16:15.961 "params": { 00:16:15.961 "abort_timeout_sec": 1, 00:16:15.961 "ack_timeout": 0, 00:16:15.961 "buf_cache_size": 4294967295, 00:16:15.961 "c2h_success": false, 00:16:15.961 "data_wr_pool_size": 0, 00:16:15.961 "dif_insert_or_strip": false, 00:16:15.961 "in_capsule_data_size": 4096, 00:16:15.961 "io_unit_size": 131072, 00:16:15.961 "max_aq_depth": 128, 00:16:15.961 "max_io_qpairs_per_ctrlr": 127, 00:16:15.961 "max_io_size": 131072, 00:16:15.961 "max_queue_depth": 128, 00:16:15.961 "num_shared_buffers": 511, 00:16:15.961 "sock_priority": 0, 00:16:15.961 "trtype": "TCP", 00:16:15.961 "zcopy": false 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_create_subsystem", 00:16:15.961 "params": { 00:16:15.961 "allow_any_host": false, 00:16:15.961 "ana_reporting": false, 00:16:15.961 "max_cntlid": 65519, 00:16:15.961 "max_namespaces": 32, 00:16:15.961 "min_cntlid": 1, 00:16:15.961 "model_number": "SPDK bdev Controller", 00:16:15.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.961 "serial_number": "00000000000000000000" 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_subsystem_add_host", 00:16:15.961 "params": { 00:16:15.961 "host": "nqn.2016-06.io.spdk:host1", 00:16:15.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.961 "psk": "key0" 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_subsystem_add_ns", 00:16:15.961 "params": { 00:16:15.961 "namespace": { 00:16:15.961 "bdev_name": "malloc0", 00:16:15.961 "nguid": "F08E3B68350D4FCE8A7A916B8AB6B1BC", 00:16:15.961 "no_auto_visible": false, 00:16:15.961 "nsid": 1, 00:16:15.961 "uuid": "f08e3b68-350d-4fce-8a7a-916b8ab6b1bc" 
00:16:15.961 }, 00:16:15.961 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:15.961 } 00:16:15.961 }, 00:16:15.961 { 00:16:15.961 "method": "nvmf_subsystem_add_listener", 00:16:15.961 "params": { 00:16:15.961 "listen_address": { 00:16:15.961 "adrfam": "IPv4", 00:16:15.961 "traddr": "10.0.0.2", 00:16:15.961 "trsvcid": "4420", 00:16:15.961 "trtype": "TCP" 00:16:15.961 }, 00:16:15.961 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:15.961 "secure_channel": true 00:16:15.961 } 00:16:15.961 } 00:16:15.961 ] 00:16:15.961 } 00:16:15.961 ] 00:16:15.961 }' 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85251 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85251 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85251 ']' 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.961 14:32:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:15.961 [2024-07-15 14:32:55.379311] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:15.961 [2024-07-15 14:32:55.379432] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.961 [2024-07-15 14:32:55.513546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.219 [2024-07-15 14:32:55.570481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.219 [2024-07-15 14:32:55.570537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.219 [2024-07-15 14:32:55.570549] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.219 [2024-07-15 14:32:55.570557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.219 [2024-07-15 14:32:55.570564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
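Note on the invocation traced above: the '-c /dev/fd/62' argument is what bash process substitution looks like in xtrace output, so the JSON printed by 'echo' is handed to the freshly started target without ever touching a file on disk. A minimal sketch of the same pattern, simplified (it drops the 'ip netns exec nvmf_tgt_ns_spdk' wrapper visible in the log and assumes 'tgtcfg' holds the JSON configuration shown above):

    # relaunch the target from the configuration captured earlier with save_config;
    # <(...) appears on the command line as a /dev/fd/NN path, matching the banner above
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
    nvmfpid=$!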
00:16:16.219 [2024-07-15 14:32:55.570644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.219 [2024-07-15 14:32:55.761075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.219 [2024-07-15 14:32:55.792996] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:16.219 [2024-07-15 14:32:55.793184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=85305 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 85305 /var/tmp/bdevperf.sock 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85305 ']' 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:17.155 14:32:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:16:17.155 "subsystems": [ 00:16:17.155 { 00:16:17.155 "subsystem": "keyring", 00:16:17.155 "config": [ 00:16:17.155 { 00:16:17.155 "method": "keyring_file_add_key", 00:16:17.155 "params": { 00:16:17.155 "name": "key0", 00:16:17.155 "path": "/tmp/tmp.ga96Pz0MiA" 00:16:17.155 } 00:16:17.155 } 00:16:17.155 ] 00:16:17.155 }, 00:16:17.155 { 00:16:17.155 "subsystem": "iobuf", 00:16:17.155 "config": [ 00:16:17.155 { 00:16:17.155 "method": "iobuf_set_options", 00:16:17.155 "params": { 00:16:17.155 "large_bufsize": 135168, 00:16:17.155 "large_pool_count": 1024, 00:16:17.155 "small_bufsize": 8192, 00:16:17.155 "small_pool_count": 8192 00:16:17.155 } 00:16:17.155 } 00:16:17.155 ] 00:16:17.155 }, 00:16:17.155 { 00:16:17.155 "subsystem": "sock", 00:16:17.155 "config": [ 00:16:17.155 { 00:16:17.155 "method": "sock_set_default_impl", 00:16:17.155 "params": { 00:16:17.155 "impl_name": "posix" 00:16:17.155 } 00:16:17.155 }, 00:16:17.155 { 00:16:17.155 "method": "sock_impl_set_options", 00:16:17.155 "params": { 00:16:17.155 "enable_ktls": false, 00:16:17.155 "enable_placement_id": 0, 00:16:17.155 "enable_quickack": false, 00:16:17.155 "enable_recv_pipe": true, 00:16:17.155 "enable_zerocopy_send_client": false, 00:16:17.155 "enable_zerocopy_send_server": true, 00:16:17.155 "impl_name": "ssl", 00:16:17.155 "recv_buf_size": 4096, 00:16:17.155 "send_buf_size": 4096, 00:16:17.155 "tls_version": 0, 00:16:17.155 "zerocopy_threshold": 0 00:16:17.155 } 00:16:17.155 }, 00:16:17.155 { 00:16:17.155 "method": "sock_impl_set_options", 00:16:17.155 "params": { 00:16:17.155 "enable_ktls": false, 00:16:17.155 "enable_placement_id": 0, 00:16:17.155 "enable_quickack": false, 00:16:17.156 "enable_recv_pipe": true, 00:16:17.156 "enable_zerocopy_send_client": false, 00:16:17.156 "enable_zerocopy_send_server": true, 
00:16:17.156 "impl_name": "posix", 00:16:17.156 "recv_buf_size": 2097152, 00:16:17.156 "send_buf_size": 2097152, 00:16:17.156 "tls_version": 0, 00:16:17.156 "zerocopy_threshold": 0 00:16:17.156 } 00:16:17.156 } 00:16:17.156 ] 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "subsystem": "vmd", 00:16:17.156 "config": [] 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "subsystem": "accel", 00:16:17.156 "config": [ 00:16:17.156 { 00:16:17.156 "method": "accel_set_options", 00:16:17.156 "params": { 00:16:17.156 "buf_count": 2048, 00:16:17.156 "large_cache_size": 16, 00:16:17.156 "sequence_count": 2048, 00:16:17.156 "small_cache_size": 128, 00:16:17.156 "task_count": 2048 00:16:17.156 } 00:16:17.156 } 00:16:17.156 ] 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "subsystem": "bdev", 00:16:17.156 "config": [ 00:16:17.156 { 00:16:17.156 "method": "bdev_set_options", 00:16:17.156 "params": { 00:16:17.156 "bdev_auto_examine": true, 00:16:17.156 "bdev_io_cache_size": 256, 00:16:17.156 "bdev_io_pool_size": 65535, 00:16:17.156 "iobuf_large_cache_size": 16, 00:16:17.156 "iobuf_small_cache_size": 128 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_raid_set_options", 00:16:17.156 "params": { 00:16:17.156 "process_window_size_kb": 1024 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_iscsi_set_options", 00:16:17.156 "params": { 00:16:17.156 "timeout_sec": 30 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_nvme_set_options", 00:16:17.156 "params": { 00:16:17.156 "action_on_timeout": "none", 00:16:17.156 "allow_accel_sequence": false, 00:16:17.156 "arbitration_burst": 0, 00:16:17.156 "bdev_retry_count": 3, 00:16:17.156 "ctrlr_loss_timeout_sec": 0, 00:16:17.156 "delay_cmd_submit": true, 00:16:17.156 "dhchap_dhgroups": [ 00:16:17.156 "null", 00:16:17.156 "ffdhe2048", 00:16:17.156 "ffdhe3072", 00:16:17.156 "ffdhe4096", 00:16:17.156 "ffdhe6144", 00:16:17.156 "ffdhe8192" 00:16:17.156 ], 00:16:17.156 "dhchap_digests": [ 00:16:17.156 "sha256", 00:16:17.156 "sha384", 00:16:17.156 "sha512" 00:16:17.156 ], 00:16:17.156 "disable_auto_failback": false, 00:16:17.156 "fast_io_fail_timeout_sec": 0, 00:16:17.156 "generate_uuids": false, 00:16:17.156 "high_priority_weight": 0, 00:16:17.156 "io_path_stat": false, 00:16:17.156 "io_queue_requests": 512, 00:16:17.156 "keep_alive_timeout_ms": 10000, 00:16:17.156 "low_priority_weight": 0, 00:16:17.156 "medium_priority_weight": 0, 00:16:17.156 "nvme_adminq_poll_period_us": 10000, 00:16:17.156 "nvme_error_stat": false, 00:16:17.156 "nvme_ioq_poll_period_us": 0, 00:16:17.156 "rdma_cm_event_timeout_ms": 0, 00:16:17.156 "rdma_max_cq_size": 0, 00:16:17.156 "rdma_srq_size": 0, 00:16:17.156 "reconnect_delay_sec": 0, 00:16:17.156 "timeout_admin_us": 0, 00:16:17.156 "timeout_us": 0, 00:16:17.156 "transport_ack_timeout": 0, 00:16:17.156 "transport_retry_count": 4, 00:16:17.156 "transport_tos": 0 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_nvme_attach_controller", 00:16:17.156 "params": { 00:16:17.156 "adrfam": "IPv4", 00:16:17.156 "ctrlr_loss_timeout_sec": 0, 00:16:17.156 "ddgst": false, 00:16:17.156 "fast_io_fail_timeout_sec": 0, 00:16:17.156 "hdgst": false, 00:16:17.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.156 "name": "nvme0", 00:16:17.156 "prchk_guard": false, 00:16:17.156 "prchk_reftag": false, 00:16:17.156 "psk": "key0", 00:16:17.156 "reconnect_delay_sec": 0, 00:16:17.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.156 "traddr": "10.0.0.2", 00:16:17.156 "trsvcid": "4420", 
00:16:17.156 "trtype": "TCP" 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_nvme_set_hotplug", 00:16:17.156 "params": { 00:16:17.156 "enable": false, 00:16:17.156 "period_us": 100000 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_enable_histogram", 00:16:17.156 "params": { 00:16:17.156 "enable": true, 00:16:17.156 "name": "nvme0n1" 00:16:17.156 } 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "method": "bdev_wait_for_examine" 00:16:17.156 } 00:16:17.156 ] 00:16:17.156 }, 00:16:17.156 { 00:16:17.156 "subsystem": "nbd", 00:16:17.156 "config": [] 00:16:17.156 } 00:16:17.156 ] 00:16:17.156 }' 00:16:17.156 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:17.156 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:17.156 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.156 14:32:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:17.156 [2024-07-15 14:32:56.554726] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:17.156 [2024-07-15 14:32:56.554838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85305 ] 00:16:17.156 [2024-07-15 14:32:56.693226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.415 [2024-07-15 14:32:56.752823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.415 [2024-07-15 14:32:56.887369] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:17.983 14:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.983 14:32:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:16:17.983 14:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:16:17.983 14:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:18.551 14:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.551 14:32:57 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:18.552 Running I/O for 1 seconds... 
00:16:19.487 00:16:19.487 Latency(us) 00:16:19.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.487 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:19.487 Verification LBA range: start 0x0 length 0x2000 00:16:19.487 nvme0n1 : 1.02 3659.97 14.30 0.00 0.00 34508.59 2546.97 26333.56 00:16:19.487 =================================================================================================================== 00:16:19.487 Total : 3659.97 14.30 0.00 0.00 34508.59 2546.97 26333.56 00:16:19.487 0 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:19.487 14:32:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:19.487 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:19.487 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:19.487 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:19.487 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:19.487 nvmf_trace.0 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85305 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85305 ']' 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85305 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85305 00:16:19.746 killing process with pid 85305 00:16:19.746 Received shutdown signal, test time was about 1.000000 seconds 00:16:19.746 00:16:19.746 Latency(us) 00:16:19.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.746 =================================================================================================================== 00:16:19.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85305' 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85305 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85305 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.746 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.746 rmmod nvme_tcp 00:16:19.746 rmmod nvme_fabrics 00:16:20.004 rmmod nvme_keyring 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85251 ']' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85251 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85251 ']' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85251 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85251 00:16:20.004 killing process with pid 85251 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85251' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85251 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85251 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:20.004 14:32:59 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kw3ETFvpl8 /tmp/tmp.N9iDk5oXtw /tmp/tmp.ga96Pz0MiA 00:16:20.264 00:16:20.264 real 1m25.208s 00:16:20.264 user 2m16.283s 00:16:20.264 sys 0m26.627s 00:16:20.264 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.264 ************************************ 00:16:20.264 14:32:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.264 END TEST nvmf_tls 00:16:20.264 ************************************ 00:16:20.264 14:32:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:20.264 14:32:59 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:20.264 14:32:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:20.264 14:32:59 nvmf_tcp 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.264 14:32:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.264 ************************************ 00:16:20.264 START TEST nvmf_fips 00:16:20.264 ************************************ 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:20.264 * Looking for test storage... 00:16:20.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.264 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@333 -- # read -ra ver1 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:16:20.265 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:16:20.530 Error setting digest 00:16:20.530 00D22B51527F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:16:20.530 00D22B51527F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.530 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:20.531 Cannot find device "nvmf_tgt_br" 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.531 Cannot find device "nvmf_tgt_br2" 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:20.531 Cannot find device "nvmf_tgt_br" 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:20.531 Cannot find device "nvmf_tgt_br2" 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:16:20.531 14:32:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.531 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:20.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:20.789 00:16:20.789 --- 10.0.0.2 ping statistics --- 00:16:20.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.789 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:20.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:20.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:20.789 00:16:20.789 --- 10.0.0.3 ping statistics --- 00:16:20.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.789 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:20.789 00:16:20.789 --- 10.0.0.1 ping statistics --- 00:16:20.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.789 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85583 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85583 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85583 ']' 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.789 14:33:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.789 [2024-07-15 14:33:00.374292] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:16:20.789 [2024-07-15 14:33:00.374379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.047 [2024-07-15 14:33:00.511027] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.047 [2024-07-15 14:33:00.583846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.047 [2024-07-15 14:33:00.583917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.047 [2024-07-15 14:33:00.583940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.047 [2024-07-15 14:33:00.583952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.047 [2024-07-15 14:33:00.583961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.047 [2024-07-15 14:33:00.584005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:21.982 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.241 [2024-07-15 14:33:01.593677] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.241 [2024-07-15 14:33:01.609620] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:22.241 [2024-07-15 14:33:01.609840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.241 [2024-07-15 14:33:01.636611] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:22.241 malloc0 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85642 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85642 /var/tmp/bdevperf.sock 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85642 ']' 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.241 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:22.241 [2024-07-15 14:33:01.732451] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:22.241 [2024-07-15 14:33:01.732548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85642 ] 00:16:22.499 [2024-07-15 14:33:01.863248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.499 [2024-07-15 14:33:01.922360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.499 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.499 14:33:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:16:22.499 14:33:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:22.759 [2024-07-15 14:33:02.213883] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.759 [2024-07-15 14:33:02.214005] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:22.759 TLSTESTn1 00:16:22.759 14:33:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:23.018 Running I/O for 10 seconds... 
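For readers following the trace, the TLS/PSK flow that fips.sh drives above condenses to a short sequence. This is only a sketch assembled from commands already visible in the log (the PSK value, key path, NQNs, RPC socket, and the 10.0.0.2:4420 listener are taken from the trace itself, not verified independently); bdevperf itself was started separately at fips.sh@145 with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10.

  # PSK key file as created by fips.sh@136-139 in the trace
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"

  # TLS-enabled controller attach through the bdevperf RPC socket (fips.sh@150)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

  # 10-second verify workload against the attached TLSTESTn1 bdev (fips.sh@154)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests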
00:16:32.988 00:16:32.988 Latency(us) 00:16:32.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.988 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:32.988 Verification LBA range: start 0x0 length 0x2000 00:16:32.988 TLSTESTn1 : 10.02 3817.13 14.91 0.00 0.00 33469.00 7387.69 25380.31 00:16:32.988 =================================================================================================================== 00:16:32.988 Total : 3817.13 14.91 0.00 0.00 33469.00 7387.69 25380.31 00:16:32.988 0 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:32.988 nvmf_trace.0 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85642 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85642 ']' 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85642 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85642 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85642' 00:16:32.988 killing process with pid 85642 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85642 00:16:32.988 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.988 00:16:32.988 Latency(us) 00:16:32.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.988 =================================================================================================================== 00:16:32.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.988 [2024-07-15 14:33:12.563542] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:32.988 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85642 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
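As a quick consistency check on the bdevperf table above: at the 4096-byte I/O size used for this run, 3817.13 IOPS x 4096 B is about 15,634,964 B/s, and 15,634,964 / 1,048,576 is about 14.91 MiB/s, which matches the MiB/s column reported for TLSTESTn1.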
00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.246 rmmod nvme_tcp 00:16:33.246 rmmod nvme_fabrics 00:16:33.246 rmmod nvme_keyring 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85583 ']' 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85583 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85583 ']' 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85583 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85583 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:33.246 killing process with pid 85583 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85583' 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85583 00:16:33.246 [2024-07-15 14:33:12.836034] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:33.246 14:33:12 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85583 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:33.503 ************************************ 00:16:33.503 END TEST nvmf_fips 00:16:33.503 ************************************ 00:16:33.503 00:16:33.503 real 0m13.390s 00:16:33.503 user 0m17.777s 00:16:33.503 sys 0m5.489s 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.503 14:33:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:33.503 14:33:13 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:33.503 14:33:13 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:16:33.503 14:33:13 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:16:33.503 14:33:13 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:16:33.503 14:33:13 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.503 14:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.767 14:33:13 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:16:33.767 14:33:13 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.767 14:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.767 14:33:13 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:16:33.767 14:33:13 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:33.767 14:33:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.767 14:33:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.767 14:33:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.767 ************************************ 00:16:33.767 START TEST nvmf_multicontroller 00:16:33.767 ************************************ 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:33.767 * Looking for test storage... 00:16:33.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.767 14:33:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:33.767 Cannot find device "nvmf_tgt_br" 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.767 Cannot find device "nvmf_tgt_br2" 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:33.767 Cannot find device "nvmf_tgt_br" 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:33.767 Cannot find device "nvmf_tgt_br2" 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:33.767 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:34.024 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:34.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:34.025 00:16:34.025 --- 10.0.0.2 ping statistics --- 00:16:34.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.025 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:34.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:34.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:34.025 00:16:34.025 --- 10.0.0.3 ping statistics --- 00:16:34.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.025 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:34.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:16:34.025 00:16:34.025 --- 10.0.0.1 ping statistics --- 00:16:34.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.025 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85986 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85986 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85986 ']' 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.025 14:33:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:34.282 [2024-07-15 14:33:13.634657] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:34.282 [2024-07-15 14:33:13.634786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.282 [2024-07-15 14:33:13.770969] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:34.282 [2024-07-15 14:33:13.842125] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:34.282 [2024-07-15 14:33:13.842391] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.282 [2024-07-15 14:33:13.842576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.282 [2024-07-15 14:33:13.842643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.282 [2024-07-15 14:33:13.842790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.282 [2024-07-15 14:33:13.842910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.282 [2024-07-15 14:33:13.843536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.282 [2024-07-15 14:33:13.843556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 [2024-07-15 14:33:14.645084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 Malloc0 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 [2024-07-15 14:33:14.701512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 [2024-07-15 14:33:14.709436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 Malloc1 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.216 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@44 -- # bdevperf_pid=86038 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86038 /var/tmp/bdevperf.sock 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 86038 ']' 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.217 14:33:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 NVMe0n1 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.592 1 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local 
arg=rpc_cmd 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 2024/07/15 14:33:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:36.592 request: 00:16:36.592 { 00:16:36.592 "method": "bdev_nvme_attach_controller", 00:16:36.592 "params": { 00:16:36.592 "name": "NVMe0", 00:16:36.592 "trtype": "tcp", 00:16:36.592 "traddr": "10.0.0.2", 00:16:36.592 "adrfam": "ipv4", 00:16:36.592 "trsvcid": "4420", 00:16:36.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.592 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:36.592 "hostaddr": "10.0.0.2", 00:16:36.592 "hostsvcid": "60000", 00:16:36.592 "prchk_reftag": false, 00:16:36.592 "prchk_guard": false, 00:16:36.592 "hdgst": false, 00:16:36.592 "ddgst": false 00:16:36.592 } 00:16:36.592 } 00:16:36.592 Got JSON-RPC error response 00:16:36.592 GoRPCClient: error on JSON-RPC call 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:36.592 14:33:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 2024/07/15 14:33:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:36.592 request: 00:16:36.592 { 00:16:36.592 "method": "bdev_nvme_attach_controller", 00:16:36.592 "params": { 00:16:36.592 "name": "NVMe0", 00:16:36.592 "trtype": "tcp", 00:16:36.592 "traddr": "10.0.0.2", 00:16:36.592 "adrfam": "ipv4", 00:16:36.592 "trsvcid": "4420", 00:16:36.592 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:36.592 "hostaddr": "10.0.0.2", 00:16:36.592 "hostsvcid": "60000", 00:16:36.592 "prchk_reftag": false, 00:16:36.592 "prchk_guard": false, 00:16:36.592 "hdgst": false, 00:16:36.592 "ddgst": false 00:16:36.592 } 00:16:36.592 } 00:16:36.592 Got JSON-RPC error response 00:16:36.592 GoRPCClient: error on JSON-RPC call 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:36.592 14:33:15 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.592 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.592 2024/07/15 14:33:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:36.592 request: 00:16:36.592 { 00:16:36.592 "method": "bdev_nvme_attach_controller", 00:16:36.592 "params": { 00:16:36.592 "name": "NVMe0", 00:16:36.592 "trtype": "tcp", 00:16:36.593 "traddr": "10.0.0.2", 00:16:36.593 "adrfam": "ipv4", 00:16:36.593 "trsvcid": "4420", 00:16:36.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.593 "hostaddr": "10.0.0.2", 00:16:36.593 "hostsvcid": "60000", 00:16:36.593 "prchk_reftag": false, 00:16:36.593 "prchk_guard": false, 00:16:36.593 "hdgst": false, 00:16:36.593 "ddgst": false, 00:16:36.593 "multipath": "disable" 00:16:36.593 } 00:16:36.593 } 00:16:36.593 Got JSON-RPC error response 00:16:36.593 GoRPCClient: error on JSON-RPC call 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.593 2024/07/15 14:33:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:36.593 request: 00:16:36.593 { 00:16:36.593 "method": "bdev_nvme_attach_controller", 00:16:36.593 "params": { 00:16:36.593 "name": "NVMe0", 00:16:36.593 "trtype": "tcp", 00:16:36.593 "traddr": "10.0.0.2", 00:16:36.593 "adrfam": "ipv4", 00:16:36.593 "trsvcid": "4420", 00:16:36.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.593 "hostaddr": "10.0.0.2", 00:16:36.593 "hostsvcid": "60000", 00:16:36.593 "prchk_reftag": false, 00:16:36.593 "prchk_guard": false, 00:16:36.593 "hdgst": false, 00:16:36.593 "ddgst": false, 00:16:36.593 "multipath": "failover" 00:16:36.593 } 00:16:36.593 } 00:16:36.593 Got JSON-RPC error response 00:16:36.593 GoRPCClient: error on JSON-RPC call 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.593 14:33:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.593 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.593 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:36.593 14:33:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:37.970 0 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 86038 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 86038 ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 86038 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86038 00:16:37.971 killing process with pid 86038 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86038' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 86038 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 86038 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:37.971 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:37.971 [2024-07-15 14:33:14.814977] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:37.971 [2024-07-15 14:33:14.815136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86038 ] 00:16:37.971 [2024-07-15 14:33:14.953747] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.971 [2024-07-15 14:33:15.021951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.971 [2024-07-15 14:33:16.081758] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 3bf889c7-c1e6-4916-a07e-93e8eccebc97 already exists 00:16:37.971 [2024-07-15 14:33:16.081867] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:3bf889c7-c1e6-4916-a07e-93e8eccebc97 alias for bdev NVMe1n1 00:16:37.971 [2024-07-15 14:33:16.081903] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:37.971 Running I/O for 1 seconds... 00:16:37.971 00:16:37.971 Latency(us) 00:16:37.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.971 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:37.971 NVMe0n1 : 1.00 19424.43 75.88 0.00 0.00 6578.87 3306.59 11319.85 00:16:37.971 =================================================================================================================== 00:16:37.971 Total : 19424.43 75.88 0.00 0.00 6578.87 3306.59 11319.85 00:16:37.971 Received shutdown signal, test time was about 1.000000 seconds 00:16:37.971 00:16:37.971 Latency(us) 00:16:37.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.971 =================================================================================================================== 00:16:37.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.971 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.971 rmmod nvme_tcp 00:16:37.971 rmmod nvme_fabrics 00:16:37.971 rmmod nvme_keyring 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.971 14:33:17 
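For reference, the nvmf_multicontroller exercise captured above reduces to a short JSON-RPC sequence against the bdevperf control socket. The sketch below is an illustrative reconstruction, not part of the captured output; it assumes the same SPDK checkout under /home/vagrant/spdk_repo/spdk and a target already listening on 10.0.0.2 ports 4420/4421, and its flags mirror the rpc_cmd invocations in the trace.

# Start bdevperf idle (-z) on its own RPC socket, with the same workload flags as the run above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
sleep 1   # the harness uses waitforlisten; a short sleep keeps the sketch simple
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }   # assumed rpc.py path

# First attach succeeds and exposes NVMe0n1.
rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Reusing the name NVMe0 with a different hostnqn, a different subsystem, multipath disabled,
# or the same network path in failover mode all fail with Code=-114, as in the responses above.
rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true

# A genuinely new path (second listener port) for the same controller name is accepted;
# the test then swaps that path for a second controller name before running the workload.
rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
rpc bdev_nvme_get_controllers   # should list both NVMe0 and NVMe1

# Run the configured write workload; its results are what appear in try.txt above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests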
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85986 ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85986 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85986 ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85986 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85986 00:16:37.971 killing process with pid 85986 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85986' 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85986 00:16:37.971 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85986 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:38.229 00:16:38.229 real 0m4.657s 00:16:38.229 user 0m14.839s 00:16:38.229 sys 0m0.902s 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:38.229 ************************************ 00:16:38.229 14:33:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:38.229 END TEST nvmf_multicontroller 00:16:38.229 ************************************ 00:16:38.487 14:33:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:38.487 14:33:17 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:38.487 14:33:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:38.487 14:33:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.487 14:33:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:38.487 ************************************ 00:16:38.487 START TEST nvmf_aer 00:16:38.487 ************************************ 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:38.487 * Looking for test storage... 00:16:38.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:38.487 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:38.488 Cannot find device "nvmf_tgt_br" 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.488 Cannot find device "nvmf_tgt_br2" 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:38.488 14:33:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:38.488 Cannot find device "nvmf_tgt_br" 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:38.488 Cannot find device "nvmf_tgt_br2" 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:38.488 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.488 
14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.748 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:38.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:16:38.748 00:16:38.748 --- 10.0.0.2 ping statistics --- 00:16:38.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.749 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:38.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:38.749 00:16:38.749 --- 10.0.0.3 ping statistics --- 00:16:38.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.749 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:38.749 00:16:38.749 --- 10.0.0.1 ping statistics --- 00:16:38.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.749 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86296 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86296 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86296 ']' 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.749 14:33:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.007 [2024-07-15 14:33:18.344615] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:39.007 [2024-07-15 14:33:18.344720] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.007 [2024-07-15 14:33:18.480831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:39.007 [2024-07-15 14:33:18.569104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.007 [2024-07-15 14:33:18.569160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
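Stripped of error handling, the nvmf_veth_init sequence traced above builds a small self-contained topology: a network namespace for the target, two veth pairs joined by a bridge, and the 10.0.0.x addresses that the pings then verify. A condensed, illustrative sketch follows; the real helper in nvmf/common.sh also sets up the second target interface, iptables rules and cleanup.

ip netns add nvmf_tgt_ns_spdk                                  # target side lives in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
# (a second pair, nvmf_tgt_if2 with 10.0.0.3, is created the same way)

ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ping -c 1 10.0.0.2                                             # connectivity check, as in the trace
# The target itself is then started inside the namespace:
#   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF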
00:16:39.007 [2024-07-15 14:33:18.569172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.007 [2024-07-15 14:33:18.569180] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.007 [2024-07-15 14:33:18.569188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.008 [2024-07-15 14:33:18.569259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.008 [2024-07-15 14:33:18.569578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.008 [2024-07-15 14:33:18.570056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.008 [2024-07-15 14:33:18.570067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 [2024-07-15 14:33:19.389753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 Malloc0 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 [2024-07-15 14:33:19.454893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:39.941 [ 00:16:39.941 { 00:16:39.941 "allow_any_host": true, 00:16:39.941 "hosts": [], 00:16:39.941 "listen_addresses": [], 00:16:39.941 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:39.941 "subtype": "Discovery" 00:16:39.941 }, 00:16:39.941 { 00:16:39.941 "allow_any_host": true, 00:16:39.941 "hosts": [], 00:16:39.941 "listen_addresses": [ 00:16:39.941 { 00:16:39.941 "adrfam": "IPv4", 00:16:39.941 "traddr": "10.0.0.2", 00:16:39.941 "trsvcid": "4420", 00:16:39.941 "trtype": "TCP" 00:16:39.941 } 00:16:39.941 ], 00:16:39.941 "max_cntlid": 65519, 00:16:39.941 "max_namespaces": 2, 00:16:39.941 "min_cntlid": 1, 00:16:39.941 "model_number": "SPDK bdev Controller", 00:16:39.941 "namespaces": [ 00:16:39.941 { 00:16:39.941 "bdev_name": "Malloc0", 00:16:39.941 "name": "Malloc0", 00:16:39.941 "nguid": "439474D896A948B8A39538153886647F", 00:16:39.941 "nsid": 1, 00:16:39.941 "uuid": "439474d8-96a9-48b8-a395-38153886647f" 00:16:39.941 } 00:16:39.941 ], 00:16:39.941 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.941 "serial_number": "SPDK00000000000001", 00:16:39.941 "subtype": "NVMe" 00:16:39.941 } 00:16:39.941 ] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86350 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:39.941 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 Malloc1 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 [ 00:16:40.200 { 00:16:40.200 "allow_any_host": true, 00:16:40.200 "hosts": [], 00:16:40.200 "listen_addresses": [], 00:16:40.200 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:40.200 "subtype": "Discovery" 00:16:40.200 }, 00:16:40.200 { 00:16:40.200 "allow_any_host": true, 00:16:40.200 "hosts": [], 00:16:40.200 "listen_addresses": [ 00:16:40.200 { 00:16:40.200 "adrfam": "IPv4", 00:16:40.200 "traddr": "10.0.0.2", 00:16:40.200 "trsvcid": "4420", 00:16:40.200 "trtype": "TCP" 00:16:40.200 } 00:16:40.200 ], 00:16:40.200 "max_cntlid": 65519, 00:16:40.200 "max_namespaces": 2, 00:16:40.200 "min_cntlid": 1, 00:16:40.200 "model_number": "SPDK bdev Controller", 00:16:40.200 "namespaces": [ 00:16:40.200 Asynchronous Event Request test 00:16:40.200 Attaching to 10.0.0.2 00:16:40.200 Attached to 10.0.0.2 00:16:40.200 Registering asynchronous event callbacks... 00:16:40.200 Starting namespace attribute notice tests for all controllers... 00:16:40.200 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:40.200 aer_cb - Changed Namespace 00:16:40.200 Cleaning up... 
00:16:40.200 { 00:16:40.200 "bdev_name": "Malloc0", 00:16:40.200 "name": "Malloc0", 00:16:40.200 "nguid": "439474D896A948B8A39538153886647F", 00:16:40.200 "nsid": 1, 00:16:40.200 "uuid": "439474d8-96a9-48b8-a395-38153886647f" 00:16:40.200 }, 00:16:40.200 { 00:16:40.200 "bdev_name": "Malloc1", 00:16:40.200 "name": "Malloc1", 00:16:40.200 "nguid": "D659E56FC47D46648F0793816B9D919D", 00:16:40.200 "nsid": 2, 00:16:40.200 "uuid": "d659e56f-c47d-4664-8f07-93816b9d919d" 00:16:40.200 } 00:16:40.200 ], 00:16:40.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:40.200 "serial_number": "SPDK00000000000001", 00:16:40.200 "subtype": "NVMe" 00:16:40.200 } 00:16:40.200 ] 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86350 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.200 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:40.458 rmmod nvme_tcp 00:16:40.458 rmmod nvme_fabrics 00:16:40.458 rmmod nvme_keyring 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86296 ']' 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86296 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86296 ']' 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86296 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # 
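The aer.sh pass above is driven entirely over RPC: export one malloc namespace, start the standalone aer tool so it registers for asynchronous events, then add a second namespace, which produces the "aer_cb - Changed Namespace" notice interleaved with the subsystem dump. A condensed sketch of that flow is shown below; the rpc.py path is an assumption, the flags match the rpc_cmd calls in the trace, and the default /var/tmp/spdk.sock target socket is implied.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # assumed client path; talks to the nvmf_tgt started above

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Register for AERs from the host side; -t touches a marker file once callbacks are armed.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # the test script polls for the same file

# Adding a second namespace is what fires the "Changed Namespace" event logged above.
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2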
ps --no-headers -o comm= 86296 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:40.458 killing process with pid 86296 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86296' 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86296 00:16:40.458 14:33:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86296 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:40.717 00:16:40.717 real 0m2.277s 00:16:40.717 user 0m6.305s 00:16:40.717 sys 0m0.564s 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.717 14:33:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:40.717 ************************************ 00:16:40.717 END TEST nvmf_aer 00:16:40.717 ************************************ 00:16:40.717 14:33:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:40.717 14:33:20 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:40.717 14:33:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.718 14:33:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.718 14:33:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:40.718 ************************************ 00:16:40.718 START TEST nvmf_async_init 00:16:40.718 ************************************ 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:40.718 * Looking for test storage... 
00:16:40.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f32fe1dab2b843f6aa76fd4baf39a982 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.718 14:33:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:40.718 Cannot find device "nvmf_tgt_br" 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:40.718 Cannot find device "nvmf_tgt_br2" 00:16:40.718 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:40.977 Cannot find device "nvmf_tgt_br" 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:16:40.977 Cannot find device "nvmf_tgt_br2" 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.977 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:41.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:41.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:41.235 00:16:41.235 --- 10.0.0.2 ping statistics --- 00:16:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.235 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:41.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:41.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:41.235 00:16:41.235 --- 10.0.0.3 ping statistics --- 00:16:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.235 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:41.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:41.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:41.235 00:16:41.235 --- 10.0.0.1 ping statistics --- 00:16:41.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:41.235 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86521 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86521 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86521 ']' 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:41.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
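The nvmf_veth_init sequence traced above builds a small, self-contained test network: the SPDK target lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2/24 and 10.0.0.3/24 on two veth interfaces, the initiator stays in the default namespace on 10.0.0.1/24, and a bridge joins the host-side peer ends. Condensed into plain commands (names, addresses and rules copied from the trace; run as root), the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one for the initiator, two for the target
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side ends move into the namespace and get their addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up and bridge the host-side ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open port 4420 for NVMe/TCP, allow bridge forwarding, verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the helper first tries to tear down any topology left over from a previous run before recreating it.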
00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:41.235 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.235 [2024-07-15 14:33:20.693853] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:41.236 [2024-07-15 14:33:20.693950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:41.236 [2024-07-15 14:33:20.827895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.494 [2024-07-15 14:33:20.885852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.494 [2024-07-15 14:33:20.885909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.494 [2024-07-15 14:33:20.885921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.494 [2024-07-15 14:33:20.885938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.494 [2024-07-15 14:33:20.885947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:41.494 [2024-07-15 14:33:20.885972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 [2024-07-15 14:33:21.006453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 null0 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 
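nvmfappstart then launches the target inside that namespace and drives it through the UNIX-domain RPC socket /var/tmp/spdk.sock; the rpc_cmd calls in the trace are the autotest wrapper around scripts/rpc.py. A minimal equivalent of the traced steps, with SPDK_DIR standing in for the repo root (/home/vagrant/spdk_repo/spdk in this run) and a simple polling loop assumed in place of the harness's waitforlisten:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  # start nvmf_tgt on core 0 inside the target namespace, flags as traced
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  # crude readiness poll: wait until the RPC socket answers
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # create the TCP transport and the 1024 MiB, 512-byte-block null bdev used as backing store
  "$RPC" nvmf_create_transport -t tcp -o
  "$RPC" bdev_null_create null0 1024 512
  "$RPC" bdev_wait_for_examine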
14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f32fe1dab2b843f6aa76fd4baf39a982 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.494 [2024-07-15 14:33:21.046516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.494 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 nvme0n1 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 [ 00:16:41.753 { 00:16:41.753 "aliases": [ 00:16:41.753 "f32fe1da-b2b8-43f6-aa76-fd4baf39a982" 00:16:41.753 ], 00:16:41.753 "assigned_rate_limits": { 00:16:41.753 "r_mbytes_per_sec": 0, 00:16:41.753 "rw_ios_per_sec": 0, 00:16:41.753 "rw_mbytes_per_sec": 0, 00:16:41.753 "w_mbytes_per_sec": 0 00:16:41.753 }, 00:16:41.753 "block_size": 512, 00:16:41.753 "claimed": false, 00:16:41.753 "driver_specific": { 00:16:41.753 "mp_policy": "active_passive", 00:16:41.753 "nvme": [ 00:16:41.753 { 00:16:41.753 "ctrlr_data": { 00:16:41.753 "ana_reporting": false, 00:16:41.753 "cntlid": 1, 00:16:41.753 "firmware_revision": "24.09", 00:16:41.753 "model_number": "SPDK bdev Controller", 00:16:41.753 "multi_ctrlr": true, 00:16:41.753 "oacs": { 00:16:41.753 "firmware": 0, 00:16:41.753 "format": 0, 00:16:41.753 "ns_manage": 0, 00:16:41.753 "security": 0 00:16:41.753 }, 00:16:41.753 "serial_number": "00000000000000000000", 00:16:41.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:41.753 "vendor_id": "0x8086" 00:16:41.753 }, 00:16:41.753 "ns_data": { 00:16:41.753 "can_share": true, 00:16:41.753 "id": 1 00:16:41.753 }, 00:16:41.753 "trid": { 00:16:41.753 "adrfam": "IPv4", 
00:16:41.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:41.753 "traddr": "10.0.0.2", 00:16:41.753 "trsvcid": "4420", 00:16:41.753 "trtype": "TCP" 00:16:41.753 }, 00:16:41.753 "vs": { 00:16:41.753 "nvme_version": "1.3" 00:16:41.753 } 00:16:41.753 } 00:16:41.753 ] 00:16:41.753 }, 00:16:41.753 "memory_domains": [ 00:16:41.753 { 00:16:41.753 "dma_device_id": "system", 00:16:41.753 "dma_device_type": 1 00:16:41.753 } 00:16:41.753 ], 00:16:41.753 "name": "nvme0n1", 00:16:41.753 "num_blocks": 2097152, 00:16:41.753 "product_name": "NVMe disk", 00:16:41.753 "supported_io_types": { 00:16:41.753 "abort": true, 00:16:41.753 "compare": true, 00:16:41.753 "compare_and_write": true, 00:16:41.753 "copy": true, 00:16:41.753 "flush": true, 00:16:41.753 "get_zone_info": false, 00:16:41.753 "nvme_admin": true, 00:16:41.753 "nvme_io": true, 00:16:41.753 "nvme_io_md": false, 00:16:41.753 "nvme_iov_md": false, 00:16:41.753 "read": true, 00:16:41.753 "reset": true, 00:16:41.753 "seek_data": false, 00:16:41.753 "seek_hole": false, 00:16:41.753 "unmap": false, 00:16:41.753 "write": true, 00:16:41.753 "write_zeroes": true, 00:16:41.753 "zcopy": false, 00:16:41.753 "zone_append": false, 00:16:41.753 "zone_management": false 00:16:41.753 }, 00:16:41.753 "uuid": "f32fe1da-b2b8-43f6-aa76-fd4baf39a982", 00:16:41.753 "zoned": false 00:16:41.753 } 00:16:41.753 ] 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.753 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 [2024-07-15 14:33:21.307240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:41.753 [2024-07-15 14:33:21.307373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1deea30 (9): Bad file descriptor 00:16:42.011 [2024-07-15 14:33:21.449927] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:42.011 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.011 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:42.011 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.011 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.011 [ 00:16:42.012 { 00:16:42.012 "aliases": [ 00:16:42.012 "f32fe1da-b2b8-43f6-aa76-fd4baf39a982" 00:16:42.012 ], 00:16:42.012 "assigned_rate_limits": { 00:16:42.012 "r_mbytes_per_sec": 0, 00:16:42.012 "rw_ios_per_sec": 0, 00:16:42.012 "rw_mbytes_per_sec": 0, 00:16:42.012 "w_mbytes_per_sec": 0 00:16:42.012 }, 00:16:42.012 "block_size": 512, 00:16:42.012 "claimed": false, 00:16:42.012 "driver_specific": { 00:16:42.012 "mp_policy": "active_passive", 00:16:42.012 "nvme": [ 00:16:42.012 { 00:16:42.012 "ctrlr_data": { 00:16:42.012 "ana_reporting": false, 00:16:42.012 "cntlid": 2, 00:16:42.012 "firmware_revision": "24.09", 00:16:42.012 "model_number": "SPDK bdev Controller", 00:16:42.012 "multi_ctrlr": true, 00:16:42.012 "oacs": { 00:16:42.012 "firmware": 0, 00:16:42.012 "format": 0, 00:16:42.012 "ns_manage": 0, 00:16:42.012 "security": 0 00:16:42.012 }, 00:16:42.012 "serial_number": "00000000000000000000", 00:16:42.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.012 "vendor_id": "0x8086" 00:16:42.012 }, 00:16:42.012 "ns_data": { 00:16:42.012 "can_share": true, 00:16:42.012 "id": 1 00:16:42.012 }, 00:16:42.012 "trid": { 00:16:42.012 "adrfam": "IPv4", 00:16:42.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.012 "traddr": "10.0.0.2", 00:16:42.012 "trsvcid": "4420", 00:16:42.012 "trtype": "TCP" 00:16:42.012 }, 00:16:42.012 "vs": { 00:16:42.012 "nvme_version": "1.3" 00:16:42.012 } 00:16:42.012 } 00:16:42.012 ] 00:16:42.012 }, 00:16:42.012 "memory_domains": [ 00:16:42.012 { 00:16:42.012 "dma_device_id": "system", 00:16:42.012 "dma_device_type": 1 00:16:42.012 } 00:16:42.012 ], 00:16:42.012 "name": "nvme0n1", 00:16:42.012 "num_blocks": 2097152, 00:16:42.012 "product_name": "NVMe disk", 00:16:42.012 "supported_io_types": { 00:16:42.012 "abort": true, 00:16:42.012 "compare": true, 00:16:42.012 "compare_and_write": true, 00:16:42.012 "copy": true, 00:16:42.012 "flush": true, 00:16:42.012 "get_zone_info": false, 00:16:42.012 "nvme_admin": true, 00:16:42.012 "nvme_io": true, 00:16:42.012 "nvme_io_md": false, 00:16:42.012 "nvme_iov_md": false, 00:16:42.012 "read": true, 00:16:42.012 "reset": true, 00:16:42.012 "seek_data": false, 00:16:42.012 "seek_hole": false, 00:16:42.012 "unmap": false, 00:16:42.012 "write": true, 00:16:42.012 "write_zeroes": true, 00:16:42.012 "zcopy": false, 00:16:42.012 "zone_append": false, 00:16:42.012 "zone_management": false 00:16:42.012 }, 00:16:42.012 "uuid": "f32fe1da-b2b8-43f6-aa76-fd4baf39a982", 00:16:42.012 "zoned": false 00:16:42.012 } 00:16:42.012 ] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:42.012 14:33:21 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.l5KgYpzDj2 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.l5KgYpzDj2 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 [2024-07-15 14:33:21.515433] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.012 [2024-07-15 14:33:21.515615] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l5KgYpzDj2 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 [2024-07-15 14:33:21.523422] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.l5KgYpzDj2 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 [2024-07-15 14:33:21.535463] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:42.012 [2024-07-15 14:33:21.535564] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:42.012 nvme0n1 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.012 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.271 [ 00:16:42.271 { 00:16:42.271 "aliases": [ 00:16:42.271 "f32fe1da-b2b8-43f6-aa76-fd4baf39a982" 00:16:42.271 ], 00:16:42.271 "assigned_rate_limits": { 00:16:42.271 "r_mbytes_per_sec": 0, 00:16:42.271 
"rw_ios_per_sec": 0, 00:16:42.271 "rw_mbytes_per_sec": 0, 00:16:42.271 "w_mbytes_per_sec": 0 00:16:42.271 }, 00:16:42.271 "block_size": 512, 00:16:42.271 "claimed": false, 00:16:42.271 "driver_specific": { 00:16:42.271 "mp_policy": "active_passive", 00:16:42.271 "nvme": [ 00:16:42.271 { 00:16:42.271 "ctrlr_data": { 00:16:42.271 "ana_reporting": false, 00:16:42.271 "cntlid": 3, 00:16:42.271 "firmware_revision": "24.09", 00:16:42.271 "model_number": "SPDK bdev Controller", 00:16:42.271 "multi_ctrlr": true, 00:16:42.271 "oacs": { 00:16:42.271 "firmware": 0, 00:16:42.271 "format": 0, 00:16:42.271 "ns_manage": 0, 00:16:42.271 "security": 0 00:16:42.271 }, 00:16:42.271 "serial_number": "00000000000000000000", 00:16:42.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.271 "vendor_id": "0x8086" 00:16:42.271 }, 00:16:42.271 "ns_data": { 00:16:42.271 "can_share": true, 00:16:42.271 "id": 1 00:16:42.271 }, 00:16:42.271 "trid": { 00:16:42.271 "adrfam": "IPv4", 00:16:42.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.271 "traddr": "10.0.0.2", 00:16:42.271 "trsvcid": "4421", 00:16:42.271 "trtype": "TCP" 00:16:42.271 }, 00:16:42.271 "vs": { 00:16:42.271 "nvme_version": "1.3" 00:16:42.271 } 00:16:42.271 } 00:16:42.271 ] 00:16:42.271 }, 00:16:42.271 "memory_domains": [ 00:16:42.271 { 00:16:42.271 "dma_device_id": "system", 00:16:42.271 "dma_device_type": 1 00:16:42.271 } 00:16:42.271 ], 00:16:42.271 "name": "nvme0n1", 00:16:42.271 "num_blocks": 2097152, 00:16:42.271 "product_name": "NVMe disk", 00:16:42.271 "supported_io_types": { 00:16:42.271 "abort": true, 00:16:42.271 "compare": true, 00:16:42.271 "compare_and_write": true, 00:16:42.271 "copy": true, 00:16:42.271 "flush": true, 00:16:42.271 "get_zone_info": false, 00:16:42.271 "nvme_admin": true, 00:16:42.271 "nvme_io": true, 00:16:42.271 "nvme_io_md": false, 00:16:42.271 "nvme_iov_md": false, 00:16:42.271 "read": true, 00:16:42.271 "reset": true, 00:16:42.271 "seek_data": false, 00:16:42.271 "seek_hole": false, 00:16:42.271 "unmap": false, 00:16:42.271 "write": true, 00:16:42.271 "write_zeroes": true, 00:16:42.271 "zcopy": false, 00:16:42.271 "zone_append": false, 00:16:42.271 "zone_management": false 00:16:42.271 }, 00:16:42.271 "uuid": "f32fe1da-b2b8-43f6-aa76-fd4baf39a982", 00:16:42.271 "zoned": false 00:16:42.271 } 00:16:42.271 ] 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.l5KgYpzDj2 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 
00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.271 rmmod nvme_tcp 00:16:42.271 rmmod nvme_fabrics 00:16:42.271 rmmod nvme_keyring 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86521 ']' 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86521 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86521 ']' 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86521 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86521 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:42.271 killing process with pid 86521 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86521' 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86521 00:16:42.271 [2024-07-15 14:33:21.778998] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:42.271 [2024-07-15 14:33:21.779036] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:42.271 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86521 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:42.529 00:16:42.529 real 0m1.804s 00:16:42.529 user 0m1.468s 00:16:42.529 sys 0m0.520s 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.529 14:33:21 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:42.529 ************************************ 00:16:42.529 END TEST nvmf_async_init 00:16:42.529 ************************************ 00:16:42.530 14:33:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.530 14:33:22 nvmf_tcp -- 
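nvmftestfini then unwinds everything: the kernel initiator modules pulled in by the earlier modprobe nvme-tcp are removed, the target process is killed and reaped, and the namespace and leftover initiator address are cleaned up. Roughly, with the pid taken from this run and the namespace removal written out explicitly (the trace hides the _remove_spdk_ns output, so that line is an assumed equivalent):

  modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 86521 && wait 86521      # nvmf_tgt pid for this run
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of the hidden _remove_spdk_ns step
  ip -4 addr flush nvmf_init_if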
nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:42.530 14:33:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:42.530 14:33:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.530 14:33:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.530 ************************************ 00:16:42.530 START TEST dma 00:16:42.530 ************************************ 00:16:42.530 14:33:22 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:42.530 * Looking for test storage... 00:16:42.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:42.530 14:33:22 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.530 14:33:22 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.530 14:33:22 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.530 14:33:22 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.530 14:33:22 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.530 14:33:22 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.530 14:33:22 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.530 14:33:22 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:16:42.530 14:33:22 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.530 14:33:22 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.530 14:33:22 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:42.530 14:33:22 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:16:42.530 00:16:42.530 real 0m0.092s 00:16:42.530 user 0m0.046s 00:16:42.530 sys 0m0.051s 00:16:42.530 14:33:22 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:42.530 14:33:22 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:16:42.530 ************************************ 00:16:42.530 END TEST dma 00:16:42.530 ************************************ 00:16:42.788 14:33:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:42.788 14:33:22 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:42.788 14:33:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:42.788 14:33:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.788 14:33:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:42.788 
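TEST dma above is nothing but environment setup because dma.sh only has an RDMA code path: with --transport=tcp the guard at host/dma.sh@12 falls through to exit 0 immediately. In effect (the $TEST_TRANSPORT name is the usual SPDK test variable, assumed here rather than visible in the expanded trace):

  if [ "$TEST_TRANSPORT" != rdma ]; then
      exit 0   # DMA offload checks are only meaningful for RDMA transports
  fi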
************************************ 00:16:42.788 START TEST nvmf_identify 00:16:42.788 ************************************ 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:42.788 * Looking for test storage... 00:16:42.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:42.788 14:33:22 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:42.789 Cannot find device "nvmf_tgt_br" 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.789 Cannot find device "nvmf_tgt_br2" 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:42.789 Cannot find device "nvmf_tgt_br" 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:42.789 Cannot find device "nvmf_tgt_br2" 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.789 14:33:22 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:42.789 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:43.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:43.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:43.047 00:16:43.047 --- 10.0.0.2 ping statistics --- 00:16:43.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.047 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:43.047 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.047 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:43.047 00:16:43.047 --- 10.0.0.3 ping statistics --- 00:16:43.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.047 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:43.047 00:16:43.047 --- 10.0.0.1 ping statistics --- 00:16:43.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.047 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:43.047 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86774 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86774 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86774 ']' 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.048 14:33:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:43.307 [2024-07-15 14:33:22.695976] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:43.307 [2024-07-15 14:33:22.696511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.307 [2024-07-15 14:33:22.839524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.565 [2024-07-15 14:33:22.903807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.565 [2024-07-15 14:33:22.903858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.565 [2024-07-15 14:33:22.903870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.565 [2024-07-15 14:33:22.903878] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.565 [2024-07-15 14:33:22.903885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.565 [2024-07-15 14:33:22.904010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.565 [2024-07-15 14:33:22.904298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.565 [2024-07-15 14:33:22.906733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.565 [2024-07-15 14:33:22.906745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 [2024-07-15 14:33:23.734155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 Malloc0 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
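With the reactors up on cores 0-3 the target is running, and the script only has to wait for its RPC socket before configuring it. A minimal sketch of that launch step, assuming the usual SPDK autotest helpers (waitforlisten polls /var/tmp/spdk.sock, as the message above states, and the script backgrounds the target to capture its PID):

    modprobe nvme-tcp                     # host-side NVMe/TCP support, loaded above by nvmf/common.sh
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                            # the harness recorded 86774 here
    waitforlisten "$nvmfpid"              # returns once /var/tmp/spdk.sock accepts RPCs

The -e 0xFFFF tracepoint mask and -m 0xF core mask match the app_setup_trace and reactor notices printed above.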
00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 [2024-07-15 14:33:23.829760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.499 [ 00:16:44.499 { 00:16:44.499 "allow_any_host": true, 00:16:44.499 "hosts": [], 00:16:44.499 "listen_addresses": [ 00:16:44.499 { 00:16:44.499 "adrfam": "IPv4", 00:16:44.499 "traddr": "10.0.0.2", 00:16:44.499 "trsvcid": "4420", 00:16:44.499 "trtype": "TCP" 00:16:44.499 } 00:16:44.499 ], 00:16:44.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:44.499 "subtype": "Discovery" 00:16:44.499 }, 00:16:44.499 { 00:16:44.499 "allow_any_host": true, 00:16:44.499 "hosts": [], 00:16:44.499 "listen_addresses": [ 00:16:44.499 { 00:16:44.499 "adrfam": "IPv4", 00:16:44.499 "traddr": "10.0.0.2", 00:16:44.499 "trsvcid": "4420", 00:16:44.499 "trtype": "TCP" 00:16:44.499 } 00:16:44.499 ], 00:16:44.499 "max_cntlid": 65519, 00:16:44.499 "max_namespaces": 32, 00:16:44.499 "min_cntlid": 1, 00:16:44.499 "model_number": "SPDK bdev Controller", 00:16:44.499 "namespaces": [ 00:16:44.499 { 00:16:44.499 "bdev_name": "Malloc0", 00:16:44.499 "eui64": "ABCDEF0123456789", 00:16:44.499 "name": "Malloc0", 00:16:44.499 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:44.499 "nsid": 1, 00:16:44.499 "uuid": "ab2dc20f-e10e-46e6-ac5b-35d014a0e19c" 00:16:44.499 } 00:16:44.499 ], 00:16:44.499 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.499 "serial_number": "SPDK00000000000001", 00:16:44.499 "subtype": "NVMe" 00:16:44.499 } 00:16:44.499 ] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.499 14:33:23 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 
-L all 00:16:44.499 [2024-07-15 14:33:23.882184] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:44.499 [2024-07-15 14:33:23.882233] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86827 ] 00:16:44.499 [2024-07-15 14:33:24.021527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:44.499 [2024-07-15 14:33:24.021596] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:44.499 [2024-07-15 14:33:24.021604] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:44.499 [2024-07-15 14:33:24.021617] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:44.499 [2024-07-15 14:33:24.021624] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:44.499 [2024-07-15 14:33:24.021780] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:44.500 [2024-07-15 14:33:24.021831] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1325a60 0 00:16:44.500 [2024-07-15 14:33:24.028721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:44.500 [2024-07-15 14:33:24.028750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:44.500 [2024-07-15 14:33:24.028756] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:44.500 [2024-07-15 14:33:24.028760] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:44.500 [2024-07-15 14:33:24.028805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.028813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.028818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.028833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:44.500 [2024-07-15 14:33:24.028864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.036715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.036737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.036742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.036748] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.036763] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:44.500 [2024-07-15 14:33:24.036772] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:44.500 [2024-07-15 14:33:24.036779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:44.500 [2024-07-15 14:33:24.036796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:16:44.500 [2024-07-15 14:33:24.036802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.036806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.036816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.036845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.036923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.036930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.036934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.036939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.036945] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:44.500 [2024-07-15 14:33:24.036953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:44.500 [2024-07-15 14:33:24.036961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.036965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.036969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.036977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.036997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.037073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.037080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.037084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.037095] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:44.500 [2024-07-15 14:33:24.037104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:44.500 [2024-07-15 14:33:24.037111] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.037128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.037146] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.037224] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.037236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.037240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037245] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.037252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:44.500 [2024-07-15 14:33:24.037262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.037279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.037299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.037371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.037378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.037381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.037391] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:44.500 [2024-07-15 14:33:24.037396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:44.500 [2024-07-15 14:33:24.037405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:44.500 [2024-07-15 14:33:24.037511] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:44.500 [2024-07-15 14:33:24.037516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:44.500 [2024-07-15 14:33:24.037526] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037531] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.037543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.037561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.037628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.037635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 
[2024-07-15 14:33:24.037638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.037648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:44.500 [2024-07-15 14:33:24.037659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.037675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.037693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.037777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.037785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.037789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.037799] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:44.500 [2024-07-15 14:33:24.037804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:44.500 [2024-07-15 14:33:24.037813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:44.500 [2024-07-15 14:33:24.037823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:44.500 [2024-07-15 14:33:24.037834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.037839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.500 [2024-07-15 14:33:24.037847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.500 [2024-07-15 14:33:24.037869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.500 [2024-07-15 14:33:24.037989] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.500 [2024-07-15 14:33:24.037998] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.500 [2024-07-15 14:33:24.038002] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.038007] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1325a60): datao=0, datal=4096, cccid=0 00:16:44.500 [2024-07-15 14:33:24.038012] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368840) on tqpair(0x1325a60): expected_datao=0, payload_size=4096 00:16:44.500 
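All of the controller-initialization tracing here is driven by the configuration and discovery query issued a moment earlier. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py (default socket /var/tmp/spdk.sock), so outside the test scripts the equivalent sequence would look roughly like:

    # create the TCP transport with the options the harness passes (-o, -u 8192)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks to serve as the namespace
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem cnode1: allow any host (-a), serial number SPDK00000000000001
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    # listeners for both the NVM subsystem and the discovery subsystem on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems      # returns the JSON dump shown above
    # query the discovery controller from the host side with full debug logging (-L all)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The FABRIC CONNECT and PROPERTY GET/SET commands traced in this stretch of the log are spdk_nvme_identify bringing up its admin queue against the discovery controller (read VS and CAP, set CC.EN = 1, wait for CSTS.RDY = 1) before it can issue IDENTIFY and GET LOG PAGE.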
[2024-07-15 14:33:24.038017] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.038026] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.038031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.038040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.500 [2024-07-15 14:33:24.038046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.500 [2024-07-15 14:33:24.038050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.500 [2024-07-15 14:33:24.038055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.500 [2024-07-15 14:33:24.038064] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:44.500 [2024-07-15 14:33:24.038069] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:44.500 [2024-07-15 14:33:24.038074] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:44.500 [2024-07-15 14:33:24.038080] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:44.500 [2024-07-15 14:33:24.038085] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:44.501 [2024-07-15 14:33:24.038091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:44.501 [2024-07-15 14:33:24.038100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:44.501 [2024-07-15 14:33:24.038108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.501 [2024-07-15 14:33:24.038147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.501 [2024-07-15 14:33:24.038244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.501 [2024-07-15 14:33:24.038251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.501 [2024-07-15 14:33:24.038255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.501 [2024-07-15 14:33:24.038268] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.501 [2024-07-15 14:33:24.038290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.501 [2024-07-15 14:33:24.038311] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.501 [2024-07-15 14:33:24.038331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.501 [2024-07-15 14:33:24.038351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:44.501 [2024-07-15 14:33:24.038365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:44.501 [2024-07-15 14:33:24.038374] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.501 [2024-07-15 14:33:24.038407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368840, cid 0, qid 0 00:16:44.501 [2024-07-15 14:33:24.038414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13689c0, cid 1, qid 0 00:16:44.501 [2024-07-15 14:33:24.038420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368b40, cid 2, qid 0 00:16:44.501 [2024-07-15 14:33:24.038425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.501 [2024-07-15 14:33:24.038430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368e40, cid 4, qid 0 00:16:44.501 [2024-07-15 14:33:24.038550] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.501 [2024-07-15 14:33:24.038557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.501 [2024-07-15 14:33:24.038561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368e40) on 
tqpair=0x1325a60 00:16:44.501 [2024-07-15 14:33:24.038571] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:44.501 [2024-07-15 14:33:24.038580] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:44.501 [2024-07-15 14:33:24.038593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.501 [2024-07-15 14:33:24.038626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368e40, cid 4, qid 0 00:16:44.501 [2024-07-15 14:33:24.038736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.501 [2024-07-15 14:33:24.038745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.501 [2024-07-15 14:33:24.038749] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038753] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1325a60): datao=0, datal=4096, cccid=4 00:16:44.501 [2024-07-15 14:33:24.038758] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368e40) on tqpair(0x1325a60): expected_datao=0, payload_size=4096 00:16:44.501 [2024-07-15 14:33:24.038763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038770] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038775] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.501 [2024-07-15 14:33:24.038790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.501 [2024-07-15 14:33:24.038794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368e40) on tqpair=0x1325a60 00:16:44.501 [2024-07-15 14:33:24.038812] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:44.501 [2024-07-15 14:33:24.038843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.501 [2024-07-15 14:33:24.038867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.038875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.038882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.501 [2024-07-15 14:33:24.038909] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368e40, cid 4, qid 0 00:16:44.501 [2024-07-15 14:33:24.038918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368fc0, cid 5, qid 0 00:16:44.501 [2024-07-15 14:33:24.039033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.501 [2024-07-15 14:33:24.039041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.501 [2024-07-15 14:33:24.039044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.039048] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1325a60): datao=0, datal=1024, cccid=4 00:16:44.501 [2024-07-15 14:33:24.039053] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368e40) on tqpair(0x1325a60): expected_datao=0, payload_size=1024 00:16:44.501 [2024-07-15 14:33:24.039058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.039065] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.039069] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.039076] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.501 [2024-07-15 14:33:24.039082] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.501 [2024-07-15 14:33:24.039086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.039090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368fc0) on tqpair=0x1325a60 00:16:44.501 [2024-07-15 14:33:24.079781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.501 [2024-07-15 14:33:24.079808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.501 [2024-07-15 14:33:24.079814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.079820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368e40) on tqpair=0x1325a60 00:16:44.501 [2024-07-15 14:33:24.079839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.079845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.079856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.501 [2024-07-15 14:33:24.079891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368e40, cid 4, qid 0 00:16:44.501 [2024-07-15 14:33:24.079986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.501 [2024-07-15 14:33:24.079994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.501 [2024-07-15 14:33:24.079998] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.080002] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1325a60): datao=0, datal=3072, cccid=4 00:16:44.501 [2024-07-15 14:33:24.080007] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368e40) on tqpair(0x1325a60): expected_datao=0, payload_size=3072 00:16:44.501 [2024-07-15 14:33:24.080012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.080021] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: 
enter 00:16:44.501 [2024-07-15 14:33:24.080026] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.080035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.501 [2024-07-15 14:33:24.080041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.501 [2024-07-15 14:33:24.080045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.080049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368e40) on tqpair=0x1325a60 00:16:44.501 [2024-07-15 14:33:24.080061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.080065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1325a60) 00:16:44.501 [2024-07-15 14:33:24.080073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.501 [2024-07-15 14:33:24.080100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368e40, cid 4, qid 0 00:16:44.501 [2024-07-15 14:33:24.080188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.501 [2024-07-15 14:33:24.080195] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.501 [2024-07-15 14:33:24.080199] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.501 [2024-07-15 14:33:24.080203] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1325a60): datao=0, datal=8, cccid=4 00:16:44.501 [2024-07-15 14:33:24.080208] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1368e40) on tqpair(0x1325a60): expected_datao=0, payload_size=8 00:16:44.502 [2024-07-15 14:33:24.080213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.502 [2024-07-15 14:33:24.080220] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.502 [2024-07-15 14:33:24.080224] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.769 ===================================================== 00:16:44.769 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:44.769 ===================================================== 00:16:44.769 Controller Capabilities/Features 00:16:44.769 ================================ 00:16:44.769 Vendor ID: 0000 00:16:44.769 Subsystem Vendor ID: 0000 00:16:44.769 Serial Number: .................... 00:16:44.769 Model Number: ........................................ 
00:16:44.769 Firmware Version: 24.09 00:16:44.769 Recommended Arb Burst: 0 00:16:44.769 IEEE OUI Identifier: 00 00 00 00:16:44.769 Multi-path I/O 00:16:44.769 May have multiple subsystem ports: No 00:16:44.769 May have multiple controllers: No 00:16:44.769 Associated with SR-IOV VF: No 00:16:44.769 Max Data Transfer Size: 131072 00:16:44.769 Max Number of Namespaces: 0 00:16:44.769 Max Number of I/O Queues: 1024 00:16:44.769 NVMe Specification Version (VS): 1.3 00:16:44.769 NVMe Specification Version (Identify): 1.3 00:16:44.769 Maximum Queue Entries: 128 00:16:44.769 Contiguous Queues Required: Yes 00:16:44.769 Arbitration Mechanisms Supported 00:16:44.769 Weighted Round Robin: Not Supported 00:16:44.769 Vendor Specific: Not Supported 00:16:44.769 Reset Timeout: 15000 ms 00:16:44.769 Doorbell Stride: 4 bytes 00:16:44.769 NVM Subsystem Reset: Not Supported 00:16:44.769 Command Sets Supported 00:16:44.769 NVM Command Set: Supported 00:16:44.769 Boot Partition: Not Supported 00:16:44.769 Memory Page Size Minimum: 4096 bytes 00:16:44.769 Memory Page Size Maximum: 4096 bytes 00:16:44.769 Persistent Memory Region: Not Supported 00:16:44.769 Optional Asynchronous Events Supported 00:16:44.769 Namespace Attribute Notices: Not Supported 00:16:44.769 Firmware Activation Notices: Not Supported 00:16:44.769 ANA Change Notices: Not Supported 00:16:44.769 PLE Aggregate Log Change Notices: Not Supported 00:16:44.769 LBA Status Info Alert Notices: Not Supported 00:16:44.769 EGE Aggregate Log Change Notices: Not Supported 00:16:44.770 Normal NVM Subsystem Shutdown event: Not Supported 00:16:44.770 Zone Descriptor Change Notices: Not Supported 00:16:44.770 Discovery Log Change Notices: Supported 00:16:44.770 Controller Attributes 00:16:44.770 128-bit Host Identifier: Not Supported 00:16:44.770 Non-Operational Permissive Mode: Not Supported 00:16:44.770 NVM Sets: Not Supported 00:16:44.770 Read Recovery Levels: Not Supported 00:16:44.770 Endurance Groups: Not Supported 00:16:44.770 Predictable Latency Mode: Not Supported 00:16:44.770 Traffic Based Keep ALive: Not Supported 00:16:44.770 Namespace Granularity: Not Supported 00:16:44.770 SQ Associations: Not Supported 00:16:44.770 UUID List: Not Supported 00:16:44.770 Multi-Domain Subsystem: Not Supported 00:16:44.770 Fixed Capacity Management: Not Supported 00:16:44.770 Variable Capacity Management: Not Supported 00:16:44.770 Delete Endurance Group: Not Supported 00:16:44.770 Delete NVM Set: Not Supported 00:16:44.770 Extended LBA Formats Supported: Not Supported 00:16:44.770 Flexible Data Placement Supported: Not Supported 00:16:44.770 00:16:44.770 Controller Memory Buffer Support 00:16:44.770 ================================ 00:16:44.770 Supported: No 00:16:44.770 00:16:44.770 Persistent Memory Region Support 00:16:44.770 ================================ 00:16:44.770 Supported: No 00:16:44.770 00:16:44.770 Admin Command Set Attributes 00:16:44.770 ============================ 00:16:44.770 Security Send/Receive: Not Supported 00:16:44.770 Format NVM: Not Supported 00:16:44.770 Firmware Activate/Download: Not Supported 00:16:44.770 Namespace Management: Not Supported 00:16:44.770 Device Self-Test: Not Supported 00:16:44.770 Directives: Not Supported 00:16:44.770 NVMe-MI: Not Supported 00:16:44.770 Virtualization Management: Not Supported 00:16:44.770 Doorbell Buffer Config: Not Supported 00:16:44.770 Get LBA Status Capability: Not Supported 00:16:44.770 Command & Feature Lockdown Capability: Not Supported 00:16:44.770 Abort Command Limit: 1 00:16:44.770 Async 
Event Request Limit: 4 00:16:44.770 Number of Firmware Slots: N/A 00:16:44.770 Firmware Slot 1 Read-Only: N/A 00:16:44.770 Firm[2024-07-15 14:33:24.124740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.770 [2024-07-15 14:33:24.124767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.770 [2024-07-15 14:33:24.124772] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.124777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368e40) on tqpair=0x1325a60 00:16:44.770 ware Activation Without Reset: N/A 00:16:44.770 Multiple Update Detection Support: N/A 00:16:44.770 Firmware Update Granularity: No Information Provided 00:16:44.770 Per-Namespace SMART Log: No 00:16:44.770 Asymmetric Namespace Access Log Page: Not Supported 00:16:44.770 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:44.770 Command Effects Log Page: Not Supported 00:16:44.770 Get Log Page Extended Data: Supported 00:16:44.770 Telemetry Log Pages: Not Supported 00:16:44.770 Persistent Event Log Pages: Not Supported 00:16:44.770 Supported Log Pages Log Page: May Support 00:16:44.770 Commands Supported & Effects Log Page: Not Supported 00:16:44.770 Feature Identifiers & Effects Log Page:May Support 00:16:44.770 NVMe-MI Commands & Effects Log Page: May Support 00:16:44.770 Data Area 4 for Telemetry Log: Not Supported 00:16:44.770 Error Log Page Entries Supported: 128 00:16:44.770 Keep Alive: Not Supported 00:16:44.770 00:16:44.770 NVM Command Set Attributes 00:16:44.770 ========================== 00:16:44.770 Submission Queue Entry Size 00:16:44.770 Max: 1 00:16:44.770 Min: 1 00:16:44.770 Completion Queue Entry Size 00:16:44.770 Max: 1 00:16:44.770 Min: 1 00:16:44.770 Number of Namespaces: 0 00:16:44.770 Compare Command: Not Supported 00:16:44.770 Write Uncorrectable Command: Not Supported 00:16:44.770 Dataset Management Command: Not Supported 00:16:44.770 Write Zeroes Command: Not Supported 00:16:44.770 Set Features Save Field: Not Supported 00:16:44.770 Reservations: Not Supported 00:16:44.770 Timestamp: Not Supported 00:16:44.770 Copy: Not Supported 00:16:44.770 Volatile Write Cache: Not Present 00:16:44.770 Atomic Write Unit (Normal): 1 00:16:44.770 Atomic Write Unit (PFail): 1 00:16:44.770 Atomic Compare & Write Unit: 1 00:16:44.770 Fused Compare & Write: Supported 00:16:44.770 Scatter-Gather List 00:16:44.770 SGL Command Set: Supported 00:16:44.770 SGL Keyed: Supported 00:16:44.770 SGL Bit Bucket Descriptor: Not Supported 00:16:44.770 SGL Metadata Pointer: Not Supported 00:16:44.770 Oversized SGL: Not Supported 00:16:44.770 SGL Metadata Address: Not Supported 00:16:44.770 SGL Offset: Supported 00:16:44.770 Transport SGL Data Block: Not Supported 00:16:44.770 Replay Protected Memory Block: Not Supported 00:16:44.770 00:16:44.770 Firmware Slot Information 00:16:44.770 ========================= 00:16:44.770 Active slot: 0 00:16:44.770 00:16:44.770 00:16:44.770 Error Log 00:16:44.770 ========= 00:16:44.770 00:16:44.770 Active Namespaces 00:16:44.770 ================= 00:16:44.770 Discovery Log Page 00:16:44.770 ================== 00:16:44.770 Generation Counter: 2 00:16:44.770 Number of Records: 2 00:16:44.770 Record Format: 0 00:16:44.770 00:16:44.770 Discovery Log Entry 0 00:16:44.770 ---------------------- 00:16:44.770 Transport Type: 3 (TCP) 00:16:44.770 Address Family: 1 (IPv4) 00:16:44.770 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:44.770 Entry Flags: 00:16:44.770 Duplicate Returned 
Information: 1 00:16:44.770 Explicit Persistent Connection Support for Discovery: 1 00:16:44.770 Transport Requirements: 00:16:44.770 Secure Channel: Not Required 00:16:44.770 Port ID: 0 (0x0000) 00:16:44.770 Controller ID: 65535 (0xffff) 00:16:44.770 Admin Max SQ Size: 128 00:16:44.770 Transport Service Identifier: 4420 00:16:44.770 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:44.770 Transport Address: 10.0.0.2 00:16:44.770 Discovery Log Entry 1 00:16:44.770 ---------------------- 00:16:44.770 Transport Type: 3 (TCP) 00:16:44.770 Address Family: 1 (IPv4) 00:16:44.770 Subsystem Type: 2 (NVM Subsystem) 00:16:44.770 Entry Flags: 00:16:44.770 Duplicate Returned Information: 0 00:16:44.770 Explicit Persistent Connection Support for Discovery: 0 00:16:44.770 Transport Requirements: 00:16:44.770 Secure Channel: Not Required 00:16:44.770 Port ID: 0 (0x0000) 00:16:44.770 Controller ID: 65535 (0xffff) 00:16:44.770 Admin Max SQ Size: 128 00:16:44.770 Transport Service Identifier: 4420 00:16:44.770 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:44.770 Transport Address: 10.0.0.2 [2024-07-15 14:33:24.124923] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:44.770 [2024-07-15 14:33:24.124942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368840) on tqpair=0x1325a60 00:16:44.770 [2024-07-15 14:33:24.124951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.770 [2024-07-15 14:33:24.124958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13689c0) on tqpair=0x1325a60 00:16:44.770 [2024-07-15 14:33:24.124963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.770 [2024-07-15 14:33:24.124969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368b40) on tqpair=0x1325a60 00:16:44.770 [2024-07-15 14:33:24.124974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.770 [2024-07-15 14:33:24.124979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.770 [2024-07-15 14:33:24.124985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.770 [2024-07-15 14:33:24.124997] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.125002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.125006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.770 [2024-07-15 14:33:24.125016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.770 [2024-07-15 14:33:24.125119] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.770 [2024-07-15 14:33:24.125230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.770 [2024-07-15 14:33:24.125238] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.770 [2024-07-15 14:33:24.125242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.125247] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.770 [2024-07-15 14:33:24.125256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.125261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.125265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.770 [2024-07-15 14:33:24.125274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.770 [2024-07-15 14:33:24.125300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.770 [2024-07-15 14:33:24.125409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.770 [2024-07-15 14:33:24.125416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.770 [2024-07-15 14:33:24.125420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.770 [2024-07-15 14:33:24.125424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.770 [2024-07-15 14:33:24.125430] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:44.771 [2024-07-15 14:33:24.125435] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:44.771 [2024-07-15 14:33:24.125446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.125463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.125482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.125558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.125565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.125569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.125585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.125601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.125619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.125709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.125718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 
14:33:24.125722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125726] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.125738] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125747] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.125754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.125775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.125854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.125861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.125864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.125879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.125888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.125896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.125915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on 
tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126527] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.126789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126876] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126880] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.126891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 
[2024-07-15 14:33:24.126907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.126926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.771 [2024-07-15 14:33:24.126979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.771 [2024-07-15 14:33:24.126986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.771 [2024-07-15 14:33:24.126990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.126994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.771 [2024-07-15 14:33:24.127005] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.127009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.771 [2024-07-15 14:33:24.127013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.771 [2024-07-15 14:33:24.127021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.771 [2024-07-15 14:33:24.127039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127125] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127129] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127237] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127271] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127500] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 
[2024-07-15 14:33:24.127678] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127682] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127840] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.127928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.127935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.127939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.127954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.127963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.127971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.127989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.128046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.128053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.128056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:16:44.772 [2024-07-15 14:33:24.128061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.128071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.128087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.128105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.128162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.128169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.128173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128177] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.128188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.128204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.128223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.128277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.128284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.128288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.128303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.128319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.772 [2024-07-15 14:33:24.128338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.772 [2024-07-15 14:33:24.128391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.772 [2024-07-15 14:33:24.128398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.772 [2024-07-15 14:33:24.128402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.772 [2024-07-15 14:33:24.128417] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.772 [2024-07-15 14:33:24.128425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.772 [2024-07-15 14:33:24.128433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.128451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.773 [2024-07-15 14:33:24.128508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.128515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.128519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.128523] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.773 [2024-07-15 14:33:24.128534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.128538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.128542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.773 [2024-07-15 14:33:24.128550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.128569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.773 [2024-07-15 14:33:24.128625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.128632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.128635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.128640] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.773 [2024-07-15 14:33:24.128650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.128655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.128659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.773 [2024-07-15 14:33:24.128667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.128685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.773 [2024-07-15 14:33:24.132716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.132737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.132742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.132747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.773 [2024-07-15 14:33:24.132761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.132767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.132771] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1325a60) 00:16:44.773 [2024-07-15 14:33:24.132780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.132807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1368cc0, cid 3, qid 0 00:16:44.773 [2024-07-15 14:33:24.132870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.132877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.132881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.132885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1368cc0) on tqpair=0x1325a60 00:16:44.773 [2024-07-15 14:33:24.132894] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:16:44.773 00:16:44.773 14:33:24 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:44.773 [2024-07-15 14:33:24.172632] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:44.773 [2024-07-15 14:33:24.172678] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86835 ] 00:16:44.773 [2024-07-15 14:33:24.313763] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:44.773 [2024-07-15 14:33:24.313830] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:44.773 [2024-07-15 14:33:24.313838] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:44.773 [2024-07-15 14:33:24.313852] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:44.773 [2024-07-15 14:33:24.313860] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:44.773 [2024-07-15 14:33:24.314013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:44.773 [2024-07-15 14:33:24.314065] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1550a60 0 00:16:44.773 [2024-07-15 14:33:24.326576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:44.773 [2024-07-15 14:33:24.326602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:44.773 [2024-07-15 14:33:24.326609] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:44.773 [2024-07-15 14:33:24.326612] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:44.773 [2024-07-15 14:33:24.326658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.326665] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.326669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.773 [2024-07-15 14:33:24.326684] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:44.773 [2024-07-15 14:33:24.326732] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.773 [2024-07-15 14:33:24.332721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.332745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.332750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.332756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.773 [2024-07-15 14:33:24.332770] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:44.773 [2024-07-15 14:33:24.332779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:44.773 [2024-07-15 14:33:24.332785] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:44.773 [2024-07-15 14:33:24.332804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.332810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.332814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.773 [2024-07-15 14:33:24.332824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.332856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.773 [2024-07-15 14:33:24.332926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.332933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.332938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.332942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.773 [2024-07-15 14:33:24.332948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:44.773 [2024-07-15 14:33:24.332957] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:44.773 [2024-07-15 14:33:24.332965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.332970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.332974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.773 [2024-07-15 14:33:24.332982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.333002] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.773 [2024-07-15 14:33:24.333057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.333064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.333068] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.773 [2024-07-15 14:33:24.333079] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:44.773 [2024-07-15 14:33:24.333088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:44.773 [2024-07-15 14:33:24.333096] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.773 [2024-07-15 14:33:24.333112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.333132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.773 [2024-07-15 14:33:24.333190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.333197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.333201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.773 [2024-07-15 14:33:24.333211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:44.773 [2024-07-15 14:33:24.333222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.773 [2024-07-15 14:33:24.333241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.773 [2024-07-15 14:33:24.333259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.773 [2024-07-15 14:33:24.333316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.773 [2024-07-15 14:33:24.333323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.773 [2024-07-15 14:33:24.333327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.773 [2024-07-15 14:33:24.333331] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.774 [2024-07-15 14:33:24.333337] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:44.774 [2024-07-15 14:33:24.333343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:44.774 [2024-07-15 14:33:24.333351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:44.774 [2024-07-15 14:33:24.333458] 
nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:44.774 [2024-07-15 14:33:24.333468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:44.774 [2024-07-15 14:33:24.333479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.333496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.774 [2024-07-15 14:33:24.333516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.774 [2024-07-15 14:33:24.333573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.774 [2024-07-15 14:33:24.333580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.774 [2024-07-15 14:33:24.333584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.774 [2024-07-15 14:33:24.333595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:44.774 [2024-07-15 14:33:24.333606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.333622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.774 [2024-07-15 14:33:24.333641] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.774 [2024-07-15 14:33:24.333710] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.774 [2024-07-15 14:33:24.333720] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.774 [2024-07-15 14:33:24.333724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.774 [2024-07-15 14:33:24.333735] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:44.774 [2024-07-15 14:33:24.333740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:44.774 [2024-07-15 14:33:24.333750] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:44.774 [2024-07-15 14:33:24.333762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:44.774 [2024-07-15 14:33:24.333773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 
[2024-07-15 14:33:24.333777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.333786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.774 [2024-07-15 14:33:24.333808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.774 [2024-07-15 14:33:24.333907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.774 [2024-07-15 14:33:24.333914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.774 [2024-07-15 14:33:24.333918] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333923] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=4096, cccid=0 00:16:44.774 [2024-07-15 14:33:24.333929] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1593840) on tqpair(0x1550a60): expected_datao=0, payload_size=4096 00:16:44.774 [2024-07-15 14:33:24.333945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333954] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333959] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.774 [2024-07-15 14:33:24.333975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.774 [2024-07-15 14:33:24.333979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.333983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.774 [2024-07-15 14:33:24.333993] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:44.774 [2024-07-15 14:33:24.333998] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:44.774 [2024-07-15 14:33:24.334003] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:44.774 [2024-07-15 14:33:24.334008] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:44.774 [2024-07-15 14:33:24.334013] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:44.774 [2024-07-15 14:33:24.334019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:44.774 [2024-07-15 14:33:24.334029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:44.774 [2024-07-15 14:33:24.334037] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.334054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.774 [2024-07-15 
14:33:24.334076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.774 [2024-07-15 14:33:24.334137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.774 [2024-07-15 14:33:24.334144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.774 [2024-07-15 14:33:24.334149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.774 [2024-07-15 14:33:24.334161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.334177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.774 [2024-07-15 14:33:24.334184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.334198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.774 [2024-07-15 14:33:24.334205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.334219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.774 [2024-07-15 14:33:24.334225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334229] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.774 [2024-07-15 14:33:24.334233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.774 [2024-07-15 14:33:24.334240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.774 [2024-07-15 14:33:24.334245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:44.774 [2024-07-15 14:33:24.334259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1550a60) 00:16:44.775 [2024-07-15 14:33:24.334279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.775 [2024-07-15 14:33:24.334301] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593840, cid 0, qid 0 00:16:44.775 [2024-07-15 14:33:24.334309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15939c0, cid 1, qid 0 00:16:44.775 [2024-07-15 14:33:24.334314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593b40, cid 2, qid 0 00:16:44.775 [2024-07-15 14:33:24.334319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.775 [2024-07-15 14:33:24.334324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.775 [2024-07-15 14:33:24.334417] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.775 [2024-07-15 14:33:24.334424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.775 [2024-07-15 14:33:24.334428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.775 [2024-07-15 14:33:24.334438] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:44.775 [2024-07-15 14:33:24.334448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334457] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1550a60) 00:16:44.775 [2024-07-15 14:33:24.334488] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:44.775 [2024-07-15 14:33:24.334508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.775 [2024-07-15 14:33:24.334566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.775 [2024-07-15 14:33:24.334573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.775 [2024-07-15 14:33:24.334577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.775 [2024-07-15 14:33:24.334646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1550a60) 00:16:44.775 
[2024-07-15 14:33:24.334681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.775 [2024-07-15 14:33:24.334715] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.775 [2024-07-15 14:33:24.334785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.775 [2024-07-15 14:33:24.334793] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.775 [2024-07-15 14:33:24.334797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334801] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=4096, cccid=4 00:16:44.775 [2024-07-15 14:33:24.334806] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1593e40) on tqpair(0x1550a60): expected_datao=0, payload_size=4096 00:16:44.775 [2024-07-15 14:33:24.334811] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334819] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334824] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.775 [2024-07-15 14:33:24.334839] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.775 [2024-07-15 14:33:24.334843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.775 [2024-07-15 14:33:24.334863] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:44.775 [2024-07-15 14:33:24.334875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.334895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.334899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1550a60) 00:16:44.775 [2024-07-15 14:33:24.334907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.775 [2024-07-15 14:33:24.334929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.775 [2024-07-15 14:33:24.335016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.775 [2024-07-15 14:33:24.335023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.775 [2024-07-15 14:33:24.335027] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335031] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=4096, cccid=4 00:16:44.775 [2024-07-15 14:33:24.335036] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1593e40) on tqpair(0x1550a60): expected_datao=0, payload_size=4096 00:16:44.775 [2024-07-15 14:33:24.335041] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335049] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335053] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.775 [2024-07-15 14:33:24.335068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.775 [2024-07-15 14:33:24.335072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.775 [2024-07-15 14:33:24.335092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335104] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1550a60) 00:16:44.775 [2024-07-15 14:33:24.335126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.775 [2024-07-15 14:33:24.335147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.775 [2024-07-15 14:33:24.335216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.775 [2024-07-15 14:33:24.335223] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.775 [2024-07-15 14:33:24.335227] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335231] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=4096, cccid=4 00:16:44.775 [2024-07-15 14:33:24.335237] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1593e40) on tqpair(0x1550a60): expected_datao=0, payload_size=4096 00:16:44.775 [2024-07-15 14:33:24.335241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335249] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335253] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.775 [2024-07-15 14:33:24.335268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.775 [2024-07-15 14:33:24.335272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.775 [2024-07-15 14:33:24.335286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335306] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335320] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335332] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:44.775 [2024-07-15 14:33:24.335337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:44.775 [2024-07-15 14:33:24.335343] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:44.775 [2024-07-15 14:33:24.335360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335366] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1550a60) 00:16:44.775 [2024-07-15 14:33:24.335374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.775 [2024-07-15 14:33:24.335382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.775 [2024-07-15 14:33:24.335390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1550a60) 00:16:44.775 [2024-07-15 14:33:24.335397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.775 [2024-07-15 14:33:24.335423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.775 [2024-07-15 14:33:24.335431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593fc0, cid 5, qid 0 00:16:44.775 [2024-07-15 14:33:24.335502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.775 [2024-07-15 14:33:24.335509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.775 [2024-07-15 14:33:24.335513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.776 [2024-07-15 14:33:24.335525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.776 [2024-07-15 14:33:24.335531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.776 [2024-07-15 14:33:24.335535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593fc0) on tqpair=0x1550a60 00:16:44.776 [2024-07-15 14:33:24.335550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.335581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593fc0, cid 5, qid 0 00:16:44.776 [2024-07-15 14:33:24.335639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.776 [2024-07-15 14:33:24.335647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.776 [2024-07-15 14:33:24.335651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593fc0) on tqpair=0x1550a60 00:16:44.776 [2024-07-15 14:33:24.335667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335672] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.335710] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593fc0, cid 5, qid 0 00:16:44.776 [2024-07-15 14:33:24.335767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.776 [2024-07-15 14:33:24.335774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.776 [2024-07-15 14:33:24.335778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593fc0) on tqpair=0x1550a60 00:16:44.776 [2024-07-15 14:33:24.335794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335806] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.335826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593fc0, cid 5, qid 0 00:16:44.776 [2024-07-15 14:33:24.335884] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.776 [2024-07-15 14:33:24.335891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.776 [2024-07-15 14:33:24.335895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335899] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593fc0) on tqpair=0x1550a60 00:16:44.776 [2024-07-15 14:33:24.335921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.335943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.335962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.335984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.335988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1550a60) 00:16:44.776 [2024-07-15 14:33:24.335995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.776 [2024-07-15 14:33:24.336016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593fc0, cid 5, qid 0 00:16:44.776 [2024-07-15 14:33:24.336024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593e40, cid 4, qid 0 00:16:44.776 [2024-07-15 14:33:24.336029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1594140, cid 6, qid 0 00:16:44.776 [2024-07-15 14:33:24.336035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15942c0, cid 7, qid 0 00:16:44.776 [2024-07-15 14:33:24.336176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.776 [2024-07-15 14:33:24.336189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.776 [2024-07-15 14:33:24.336193] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336197] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=8192, cccid=5 00:16:44.776 [2024-07-15 14:33:24.336203] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1593fc0) on tqpair(0x1550a60): expected_datao=0, payload_size=8192 00:16:44.776 [2024-07-15 14:33:24.336208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336225] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336230] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.776 [2024-07-15 14:33:24.336243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.776 [2024-07-15 14:33:24.336247] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336251] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=512, cccid=4 00:16:44.776 [2024-07-15 14:33:24.336256] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1593e40) on tqpair(0x1550a60): expected_datao=0, payload_size=512 00:16:44.776 [2024-07-15 14:33:24.336261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336268] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336271] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.776 [2024-07-15 14:33:24.336284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.776 [2024-07-15 14:33:24.336288] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336292] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=512, cccid=6 00:16:44.776 [2024-07-15 14:33:24.336297] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1594140) on tqpair(0x1550a60): expected_datao=0, payload_size=512 00:16:44.776 [2024-07-15 14:33:24.336301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336308] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336312] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336318] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:44.776 [2024-07-15 14:33:24.336324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:44.776 [2024-07-15 14:33:24.336328] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336332] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1550a60): datao=0, datal=4096, cccid=7 00:16:44.776 [2024-07-15 14:33:24.336336] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15942c0) on tqpair(0x1550a60): expected_datao=0, payload_size=4096 00:16:44.776 [2024-07-15 14:33:24.336341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336348] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336352] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336361] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.776 [2024-07-15 14:33:24.336367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.776 [2024-07-15 14:33:24.336371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336375] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593fc0) on tqpair=0x1550a60 00:16:44.776 [2024-07-15 14:33:24.336393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.776 [2024-07-15 14:33:24.336400] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.776 [2024-07-15 14:33:24.336404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.776 [2024-07-15 14:33:24.336408] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593e40) on tqpair=0x1550a60 00:16:44.776 ===================================================== 00:16:44.776 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:44.776 ===================================================== 00:16:44.776 Controller Capabilities/Features 00:16:44.776 ================================ 00:16:44.776 Vendor ID: 8086 00:16:44.776 Subsystem Vendor ID: 8086 00:16:44.776 Serial Number: SPDK00000000000001 00:16:44.776 Model Number: SPDK bdev Controller 00:16:44.776 Firmware Version: 24.09 00:16:44.776 Recommended Arb Burst: 6 00:16:44.776 IEEE OUI Identifier: e4 d2 5c 00:16:44.776 Multi-path I/O 00:16:44.776 
May have multiple subsystem ports: Yes 00:16:44.776 May have multiple controllers: Yes 00:16:44.776 Associated with SR-IOV VF: No 00:16:44.776 Max Data Transfer Size: 131072 00:16:44.776 Max Number of Namespaces: 32 00:16:44.776 Max Number of I/O Queues: 127 00:16:44.776 NVMe Specification Version (VS): 1.3 00:16:44.776 NVMe Specification Version (Identify): 1.3 00:16:44.776 Maximum Queue Entries: 128 00:16:44.776 Contiguous Queues Required: Yes 00:16:44.776 Arbitration Mechanisms Supported 00:16:44.776 Weighted Round Robin: Not Supported 00:16:44.776 Vendor Specific: Not Supported 00:16:44.776 Reset Timeout: 15000 ms 00:16:44.776 Doorbell Stride: 4 bytes 00:16:44.776 NVM Subsystem Reset: Not Supported 00:16:44.776 Command Sets Supported 00:16:44.776 NVM Command Set: Supported 00:16:44.776 Boot Partition: Not Supported 00:16:44.776 Memory Page Size Minimum: 4096 bytes 00:16:44.776 Memory Page Size Maximum: 4096 bytes 00:16:44.776 Persistent Memory Region: Not Supported 00:16:44.776 Optional Asynchronous Events Supported 00:16:44.776 Namespace Attribute Notices: Supported 00:16:44.776 Firmware Activation Notices: Not Supported 00:16:44.776 ANA Change Notices: Not Supported 00:16:44.776 PLE Aggregate Log Change Notices: Not Supported 00:16:44.776 LBA Status Info Alert Notices: Not Supported 00:16:44.777 EGE Aggregate Log Change Notices: Not Supported 00:16:44.777 Normal NVM Subsystem Shutdown event: Not Supported 00:16:44.777 Zone Descriptor Change Notices: Not Supported 00:16:44.777 Discovery Log Change Notices: Not Supported 00:16:44.777 Controller Attributes 00:16:44.777 128-bit Host Identifier: Supported 00:16:44.777 Non-Operational Permissive Mode: Not Supported 00:16:44.777 NVM Sets: Not Supported 00:16:44.777 Read Recovery Levels: Not Supported 00:16:44.777 Endurance Groups: Not Supported 00:16:44.777 Predictable Latency Mode: Not Supported 00:16:44.777 Traffic Based Keep ALive: Not Supported 00:16:44.777 Namespace Granularity: Not Supported 00:16:44.777 SQ Associations: Not Supported 00:16:44.777 UUID List: Not Supported 00:16:44.777 Multi-Domain Subsystem: Not Supported 00:16:44.777 Fixed Capacity Management: Not Supported 00:16:44.777 Variable Capacity Management: Not Supported 00:16:44.777 Delete Endurance Group: Not Supported 00:16:44.777 Delete NVM Set: Not Supported 00:16:44.777 Extended LBA Formats Supported: Not Supported 00:16:44.777 Flexible Data Placement Supported: Not Supported 00:16:44.777 00:16:44.777 Controller Memory Buffer Support 00:16:44.777 ================================ 00:16:44.777 Supported: No 00:16:44.777 00:16:44.777 Persistent Memory Region Support 00:16:44.777 ================================ 00:16:44.777 Supported: No 00:16:44.777 00:16:44.777 Admin Command Set Attributes 00:16:44.777 ============================ 00:16:44.777 Security Send/Receive: Not Supported 00:16:44.777 Format NVM: Not Supported 00:16:44.777 Firmware Activate/Download: Not Supported 00:16:44.777 Namespace Management: Not Supported 00:16:44.777 Device Self-Test: Not Supported 00:16:44.777 Directives: Not Supported 00:16:44.777 NVMe-MI: Not Supported 00:16:44.777 Virtualization Management: Not Supported 00:16:44.777 Doorbell Buffer Config: Not Supported 00:16:44.777 Get LBA Status Capability: Not Supported 00:16:44.777 Command & Feature Lockdown Capability: Not Supported 00:16:44.777 Abort Command Limit: 4 00:16:44.777 Async Event Request Limit: 4 00:16:44.777 Number of Firmware Slots: N/A 00:16:44.777 Firmware Slot 1 Read-Only: N/A 00:16:44.777 Firmware Activation Without Reset: 
[2024-07-15 14:33:24.336421] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.777 [2024-07-15 14:33:24.336429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.777 [2024-07-15 14:33:24.336432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.777 [2024-07-15 14:33:24.336437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1594140) on tqpair=0x1550a60 00:16:44.777 [2024-07-15 14:33:24.336445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.777 [2024-07-15 14:33:24.336451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.777 [2024-07-15 14:33:24.336455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.777 [2024-07-15 14:33:24.336459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15942c0) on tqpair=0x1550a60 00:16:44.777 N/A 00:16:44.777 Multiple Update Detection Support: N/A 00:16:44.777 Firmware Update Granularity: No Information Provided 00:16:44.777 Per-Namespace SMART Log: No 00:16:44.777 Asymmetric Namespace Access Log Page: Not Supported 00:16:44.777 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:44.777 Command Effects Log Page: Supported 00:16:44.777 Get Log Page Extended Data: Supported 00:16:44.777 Telemetry Log Pages: Not Supported 00:16:44.777 Persistent Event Log Pages: Not Supported 00:16:44.777 Supported Log Pages Log Page: May Support 00:16:44.777 Commands Supported & Effects Log Page: Not Supported 00:16:44.777 Feature Identifiers & Effects Log Page:May Support 00:16:44.777 NVMe-MI Commands & Effects Log Page: May Support 00:16:44.777 Data Area 4 for Telemetry Log: Not Supported 00:16:44.777 Error Log Page Entries Supported: 128 00:16:44.777 Keep Alive: Supported 00:16:44.777 Keep Alive Granularity: 10000 ms 00:16:44.777 00:16:44.777 NVM Command Set Attributes 00:16:44.777 ========================== 00:16:44.777 Submission Queue Entry Size 00:16:44.777 Max: 64 00:16:44.777 Min: 64 00:16:44.777 Completion Queue Entry Size 00:16:44.777 Max: 16 00:16:44.777 Min: 16 00:16:44.777 Number of Namespaces: 32 00:16:44.777 Compare Command: Supported 00:16:44.777 Write Uncorrectable Command: Not Supported 00:16:44.777 Dataset Management Command: Supported 00:16:44.777 Write Zeroes Command: Supported 00:16:44.777 Set Features Save Field: Not Supported 00:16:44.777 Reservations: Supported 00:16:44.777 Timestamp: Not Supported 00:16:44.777 Copy: Supported 00:16:44.777 Volatile Write Cache: Present 00:16:44.777 Atomic Write Unit (Normal): 1 00:16:44.777 Atomic Write Unit (PFail): 1 00:16:44.777 Atomic Compare & Write Unit: 1 00:16:44.777 Fused Compare & Write: Supported 00:16:44.777 Scatter-Gather List 00:16:44.777 SGL Command Set: Supported 00:16:44.777 SGL Keyed: Supported 00:16:44.777 SGL Bit Bucket Descriptor: Not Supported 00:16:44.777 SGL Metadata Pointer: Not Supported 00:16:44.777 Oversized SGL: Not Supported 00:16:44.777 SGL Metadata Address: Not Supported 00:16:44.777 SGL Offset: Supported 00:16:44.777 Transport SGL Data Block: Not Supported 00:16:44.777 Replay Protected Memory Block: Not Supported 00:16:44.777 00:16:44.777 Firmware Slot Information 00:16:44.777 ========================= 00:16:44.777 Active slot: 1 00:16:44.777 Slot 1 Firmware Revision: 24.09 00:16:44.777 00:16:44.777 00:16:44.777 Commands Supported and Effects 00:16:44.777 ============================== 00:16:44.777 Admin Commands 00:16:44.777 -------------- 00:16:44.777 Get Log Page (02h): Supported 00:16:44.777 Identify 
(06h): Supported 00:16:44.777 Abort (08h): Supported 00:16:44.777 Set Features (09h): Supported 00:16:44.777 Get Features (0Ah): Supported 00:16:44.777 Asynchronous Event Request (0Ch): Supported 00:16:44.777 Keep Alive (18h): Supported 00:16:44.777 I/O Commands 00:16:44.777 ------------ 00:16:44.777 Flush (00h): Supported LBA-Change 00:16:44.777 Write (01h): Supported LBA-Change 00:16:44.777 Read (02h): Supported 00:16:44.777 Compare (05h): Supported 00:16:44.777 Write Zeroes (08h): Supported LBA-Change 00:16:44.777 Dataset Management (09h): Supported LBA-Change 00:16:44.777 Copy (19h): Supported LBA-Change 00:16:44.777 00:16:44.777 Error Log 00:16:44.777 ========= 00:16:44.777 00:16:44.777 Arbitration 00:16:44.777 =========== 00:16:44.777 Arbitration Burst: 1 00:16:44.777 00:16:44.777 Power Management 00:16:44.777 ================ 00:16:44.777 Number of Power States: 1 00:16:44.777 Current Power State: Power State #0 00:16:44.777 Power State #0: 00:16:44.777 Max Power: 0.00 W 00:16:44.777 Non-Operational State: Operational 00:16:44.777 Entry Latency: Not Reported 00:16:44.777 Exit Latency: Not Reported 00:16:44.777 Relative Read Throughput: 0 00:16:44.777 Relative Read Latency: 0 00:16:44.777 Relative Write Throughput: 0 00:16:44.777 Relative Write Latency: 0 00:16:44.777 Idle Power: Not Reported 00:16:44.777 Active Power: Not Reported 00:16:44.777 Non-Operational Permissive Mode: Not Supported 00:16:44.777 00:16:44.777 Health Information 00:16:44.777 ================== 00:16:44.777 Critical Warnings: 00:16:44.777 Available Spare Space: OK 00:16:44.777 Temperature: OK 00:16:44.777 Device Reliability: OK 00:16:44.777 Read Only: No 00:16:44.777 Volatile Memory Backup: OK 00:16:44.777 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:44.777 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:44.777 Available Spare: 0% 00:16:44.777 Available Spare Threshold: 0% 00:16:44.777 Life Percentage Used:[2024-07-15 14:33:24.336569] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.777 [2024-07-15 14:33:24.336578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1550a60) 00:16:44.777 [2024-07-15 14:33:24.336586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.777 [2024-07-15 14:33:24.336611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15942c0, cid 7, qid 0 00:16:44.777 [2024-07-15 14:33:24.336680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.777 [2024-07-15 14:33:24.336687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.777 [2024-07-15 14:33:24.336691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.777 [2024-07-15 14:33:24.340707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15942c0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.340767] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:44.778 [2024-07-15 14:33:24.340782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593840) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.340790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.778 [2024-07-15 14:33:24.340797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15939c0) on tqpair=0x1550a60 00:16:44.778 
[2024-07-15 14:33:24.340802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.778 [2024-07-15 14:33:24.340808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593b40) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.340813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.778 [2024-07-15 14:33:24.340818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.340823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.778 [2024-07-15 14:33:24.340833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.340838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.340843] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.340852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.340880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.340942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.340950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.340954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.340958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.340966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.340971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.340975] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.340983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341103] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:44.778 [2024-07-15 14:33:24.341121] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:44.778 [2024-07-15 14:33:24.341131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 
14:33:24.341140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341230] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341255] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341471] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341620] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.778 [2024-07-15 14:33:24.341847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.778 [2024-07-15 14:33:24.341873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.778 [2024-07-15 14:33:24.341882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.778 [2024-07-15 14:33:24.341889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.778 [2024-07-15 14:33:24.341908] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, 
qid 0 00:16:44.778 [2024-07-15 14:33:24.341974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.778 [2024-07-15 14:33:24.341986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.778 [2024-07-15 14:33:24.341990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.341995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342006] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342015] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342110] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342349] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342364] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342416] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342725] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342762] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342766] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.342886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.342902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.342922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.342976] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.342983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.342987] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.342992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.343003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.343019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.343038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.343091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.343102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.343106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.343122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343127] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.343139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.343159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.343215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.343222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.343226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.343241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.779 [2024-07-15 14:33:24.343258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.779 [2024-07-15 14:33:24.343279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.779 [2024-07-15 14:33:24.343333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.779 [2024-07-15 14:33:24.343345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.779 [2024-07-15 14:33:24.343350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.779 [2024-07-15 14:33:24.343355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.779 [2024-07-15 14:33:24.343367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.343385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.343405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.343460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.343467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.343472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.343487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 
[2024-07-15 14:33:24.343505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.343525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.343578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.343590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.343595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.343612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343621] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.343630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.343649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.343715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.343724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.343729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.343745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.343763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.343786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.343843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.343850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.343854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.343870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.343887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.343907] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.343960] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.343972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.343977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.343993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.343998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.344011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.344031] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.344083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.344090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.344094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.344110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344120] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.344128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.344147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.344200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.344212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.344217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.344233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344242] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.344250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.344270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.344322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 
[2024-07-15 14:33:24.344329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.344333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.344349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.344367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.344386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.344440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.344447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.344451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.344467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.344484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.344504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.344557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.344569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.344574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.344590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.344600] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.344608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.344628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.344682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.344693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.348718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:16:44.780 [2024-07-15 14:33:24.348727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.348744] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.348749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.348754] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1550a60) 00:16:44.780 [2024-07-15 14:33:24.348763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:44.780 [2024-07-15 14:33:24.348791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1593cc0, cid 3, qid 0 00:16:44.780 [2024-07-15 14:33:24.348853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:44.780 [2024-07-15 14:33:24.348861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:44.780 [2024-07-15 14:33:24.348865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:44.780 [2024-07-15 14:33:24.348869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1593cc0) on tqpair=0x1550a60 00:16:44.780 [2024-07-15 14:33:24.348878] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:16:45.039 0% 00:16:45.039 Data Units Read: 0 00:16:45.039 Data Units Written: 0 00:16:45.039 Host Read Commands: 0 00:16:45.039 Host Write Commands: 0 00:16:45.039 Controller Busy Time: 0 minutes 00:16:45.039 Power Cycles: 0 00:16:45.040 Power On Hours: 0 hours 00:16:45.040 Unsafe Shutdowns: 0 00:16:45.040 Unrecoverable Media Errors: 0 00:16:45.040 Lifetime Error Log Entries: 0 00:16:45.040 Warning Temperature Time: 0 minutes 00:16:45.040 Critical Temperature Time: 0 minutes 00:16:45.040 00:16:45.040 Number of Queues 00:16:45.040 ================ 00:16:45.040 Number of I/O Submission Queues: 127 00:16:45.040 Number of I/O Completion Queues: 127 00:16:45.040 00:16:45.040 Active Namespaces 00:16:45.040 ================= 00:16:45.040 Namespace ID:1 00:16:45.040 Error Recovery Timeout: Unlimited 00:16:45.040 Command Set Identifier: NVM (00h) 00:16:45.040 Deallocate: Supported 00:16:45.040 Deallocated/Unwritten Error: Not Supported 00:16:45.040 Deallocated Read Value: Unknown 00:16:45.040 Deallocate in Write Zeroes: Not Supported 00:16:45.040 Deallocated Guard Field: 0xFFFF 00:16:45.040 Flush: Supported 00:16:45.040 Reservation: Supported 00:16:45.040 Namespace Sharing Capabilities: Multiple Controllers 00:16:45.040 Size (in LBAs): 131072 (0GiB) 00:16:45.040 Capacity (in LBAs): 131072 (0GiB) 00:16:45.040 Utilization (in LBAs): 131072 (0GiB) 00:16:45.040 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:45.040 EUI64: ABCDEF0123456789 00:16:45.040 UUID: ab2dc20f-e10e-46e6-ac5b-35d014a0e19c 00:16:45.040 Thin Provisioning: Not Supported 00:16:45.040 Per-NS Atomic Units: Yes 00:16:45.040 Atomic Boundary Size (Normal): 0 00:16:45.040 Atomic Boundary Size (PFail): 0 00:16:45.040 Atomic Boundary Offset: 0 00:16:45.040 Maximum Single Source Range Length: 65535 00:16:45.040 Maximum Copy Length: 65535 00:16:45.040 Maximum Source Range Count: 1 00:16:45.040 NGUID/EUI64 Never Reused: No 00:16:45.040 Namespace Write Protected: No 00:16:45.040 Number of LBA Formats: 1 00:16:45.040 Current LBA Format: LBA Format #00 00:16:45.040 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:45.040 00:16:45.040 14:33:24 
nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.040 rmmod nvme_tcp 00:16:45.040 rmmod nvme_fabrics 00:16:45.040 rmmod nvme_keyring 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86774 ']' 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86774 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86774 ']' 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86774 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86774 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:45.040 killing process with pid 86774 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86774' 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86774 00:16:45.040 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86774 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.299 14:33:24 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:45.299 ************************************ 00:16:45.299 END TEST nvmf_identify 00:16:45.299 ************************************ 00:16:45.299 00:16:45.299 real 0m2.571s 00:16:45.299 user 0m7.409s 00:16:45.299 sys 0m0.563s 00:16:45.300 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.300 14:33:24 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.300 14:33:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:45.300 14:33:24 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:45.300 14:33:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:45.300 14:33:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.300 14:33:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:45.300 ************************************ 00:16:45.300 START TEST nvmf_perf 00:16:45.300 ************************************ 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:45.300 * Looking for test storage... 00:16:45.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
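The defaults that test/nvmf/common.sh establishes above (listener ports 4420/4421/4422, a freshly generated NVME_HOSTNQN and NVME_HOSTID, and the "nvme connect" wrapper) are what the rest of this run builds on. As a hedged illustration only, since this perf test drives I/O with the userspace spdk_nvme_perf tool rather than the kernel initiator, a kernel-side connect using those same values against the cnode1 subsystem created later in this log would look roughly like:

    # illustrative sketch, not part of the test run; address, port and NQN taken from this log
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"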
00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:45.300 14:33:24 
nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.300 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:45.559 Cannot find device "nvmf_tgt_br" 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.559 Cannot find device "nvmf_tgt_br2" 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:45.559 Cannot find device "nvmf_tgt_br" 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:45.559 
14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:45.559 Cannot find device "nvmf_tgt_br2" 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:45.559 14:33:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.559 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:45.559 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A 
FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:45.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:45.818 00:16:45.818 --- 10.0.0.2 ping statistics --- 00:16:45.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.818 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:45.818 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.818 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:45.818 00:16:45.818 --- 10.0.0.3 ping statistics --- 00:16:45.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.818 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:45.818 00:16:45.818 --- 10.0.0.1 ping statistics --- 00:16:45.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.818 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86999 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86999 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86999 ']' 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
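Condensed, the network that nvmf_veth_init built and verified with the pings above is: one initiator-side veth left in the root namespace at 10.0.0.1, two target-side veths moved into the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, and all of the peer ends attached to a single bridge. A minimal restatement of those commands, with names and addresses exactly as they appear in this log (the real helper also tears down any leftover devices first):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # root namespace reaching the target-side interface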
00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.818 14:33:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:45.818 [2024-07-15 14:33:25.290983] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:45.818 [2024-07-15 14:33:25.291083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.076 [2024-07-15 14:33:25.428084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.076 [2024-07-15 14:33:25.486899] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.076 [2024-07-15 14:33:25.486956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.076 [2024-07-15 14:33:25.486968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.076 [2024-07-15 14:33:25.486976] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.076 [2024-07-15 14:33:25.486983] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:46.077 [2024-07-15 14:33:25.487140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.077 [2024-07-15 14:33:25.487240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.077 [2024-07-15 14:33:25.487862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.077 [2024-07-15 14:33:25.487869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.640 14:33:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.640 14:33:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:46.640 14:33:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:46.640 14:33:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:46.640 14:33:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:46.898 14:33:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.898 14:33:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:46.898 14:33:26 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:47.157 14:33:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:47.157 14:33:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:47.415 14:33:26 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:47.415 14:33:26 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.674 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:47.674 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:47.674 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:47.674 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:47.674 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:47.931 [2024-07-15 14:33:27.421359] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.931 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:48.189 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:48.189 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.463 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:48.463 14:33:27 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:48.721 14:33:28 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.978 [2024-07-15 14:33:28.378565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.978 14:33:28 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:49.235 14:33:28 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:49.235 14:33:28 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:49.235 14:33:28 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:49.235 14:33:28 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:50.608 Initializing NVMe Controllers 00:16:50.608 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:50.608 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:50.608 Initialization complete. Launching workers. 00:16:50.608 ======================================================== 00:16:50.608 Latency(us) 00:16:50.608 Device Information : IOPS MiB/s Average min max 00:16:50.608 PCIE (0000:00:10.0) NSID 1 from core 0: 24691.11 96.45 1295.63 321.29 6883.06 00:16:50.608 ======================================================== 00:16:50.608 Total : 24691.11 96.45 1295.63 321.29 6883.06 00:16:50.608 00:16:50.608 14:33:29 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:51.542 Initializing NVMe Controllers 00:16:51.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:51.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:51.542 Initialization complete. Launching workers. 
00:16:51.542 ======================================================== 00:16:51.542 Latency(us) 00:16:51.542 Device Information : IOPS MiB/s Average min max 00:16:51.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3380.58 13.21 295.50 119.79 6133.08 00:16:51.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.51 0.48 8162.07 5986.10 12045.97 00:16:51.542 ======================================================== 00:16:51.542 Total : 3503.09 13.68 570.61 119.79 12045.97 00:16:51.542 00:16:51.800 14:33:31 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:53.174 Initializing NVMe Controllers 00:16:53.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:53.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:53.174 Initialization complete. Launching workers. 00:16:53.174 ======================================================== 00:16:53.174 Latency(us) 00:16:53.174 Device Information : IOPS MiB/s Average min max 00:16:53.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8390.43 32.78 3817.94 697.51 9165.79 00:16:53.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2710.88 10.59 11927.15 6744.31 20290.54 00:16:53.174 ======================================================== 00:16:53.174 Total : 11101.31 43.36 5798.16 697.51 20290.54 00:16:53.174 00:16:53.174 14:33:32 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:53.174 14:33:32 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:55.740 Initializing NVMe Controllers 00:16:55.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.740 Controller IO queue size 128, less than required. 00:16:55.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.740 Controller IO queue size 128, less than required. 00:16:55.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:55.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:55.740 Initialization complete. Launching workers. 
00:16:55.740 ======================================================== 00:16:55.740 Latency(us) 00:16:55.740 Device Information : IOPS MiB/s Average min max 00:16:55.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1556.95 389.24 83973.82 41564.89 166178.03 00:16:55.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 549.48 137.37 240226.42 104074.90 379451.42 00:16:55.740 ======================================================== 00:16:55.740 Total : 2106.44 526.61 124733.76 41564.89 379451.42 00:16:55.740 00:16:55.740 14:33:35 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:55.998 Initializing NVMe Controllers 00:16:55.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:55.998 Controller IO queue size 128, less than required. 00:16:55.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.998 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:55.998 Controller IO queue size 128, less than required. 00:16:55.998 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:55.998 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:16:55.998 WARNING: Some requested NVMe devices were skipped 00:16:55.998 No valid NVMe controllers or AIO or URING devices found 00:16:55.998 14:33:35 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:58.527 Initializing NVMe Controllers 00:16:58.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.527 Controller IO queue size 128, less than required. 00:16:58.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.527 Controller IO queue size 128, less than required. 00:16:58.527 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:58.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:58.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:58.527 Initialization complete. Launching workers. 
00:16:58.527 00:16:58.527 ==================== 00:16:58.527 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:58.527 TCP transport: 00:16:58.527 polls: 7869 00:16:58.527 idle_polls: 4164 00:16:58.527 sock_completions: 3705 00:16:58.527 nvme_completions: 4805 00:16:58.527 submitted_requests: 7254 00:16:58.527 queued_requests: 1 00:16:58.527 00:16:58.527 ==================== 00:16:58.527 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:58.527 TCP transport: 00:16:58.527 polls: 10285 00:16:58.527 idle_polls: 7120 00:16:58.527 sock_completions: 3165 00:16:58.527 nvme_completions: 6001 00:16:58.527 submitted_requests: 8976 00:16:58.527 queued_requests: 1 00:16:58.527 ======================================================== 00:16:58.527 Latency(us) 00:16:58.527 Device Information : IOPS MiB/s Average min max 00:16:58.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.92 300.23 108570.81 64320.21 269583.27 00:16:58.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1499.90 374.97 86028.08 35675.80 164657.69 00:16:58.527 ======================================================== 00:16:58.527 Total : 2700.82 675.20 96051.71 35675.80 269583.27 00:16:58.527 00:16:58.527 14:33:37 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:58.527 14:33:37 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.784 14:33:38 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:58.784 14:33:38 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:58.784 14:33:38 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.785 rmmod nvme_tcp 00:16:58.785 rmmod nvme_fabrics 00:16:58.785 rmmod nvme_keyring 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86999 ']' 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86999 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86999 ']' 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86999 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86999 00:16:58.785 killing process with pid 86999 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:58.785 14:33:38 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86999' 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86999 00:16:58.785 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86999 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.719 14:33:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.719 14:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:59.719 00:16:59.719 real 0m14.234s 00:16:59.719 user 0m52.796s 00:16:59.719 sys 0m3.463s 00:16:59.719 14:33:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:59.719 14:33:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:59.719 ************************************ 00:16:59.719 END TEST nvmf_perf 00:16:59.719 ************************************ 00:16:59.719 14:33:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:59.719 14:33:39 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:59.719 14:33:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:59.719 14:33:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.719 14:33:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:59.719 ************************************ 00:16:59.719 START TEST nvmf_fio_host 00:16:59.719 ************************************ 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:59.719 * Looking for test storage... 
00:16:59.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.719 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
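Both host tests in this log, nvmf_perf above and the nvmf_fio_host test now under way, provision the target the same way once nvmf_tgt is up and listening on the RPC socket: create a TCP transport, back a namespace with a malloc bdev, and expose it through a subsystem and listener. Condensed from the rpc.py calls that appear in this log (rpc_py is /home/vagrant/spdk_repo/spdk/scripts/rpc.py; this fio test uses Malloc1, while the perf test earlier created Malloc0 and also attached the local Nvme0n1 bdev as a second namespace):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After that, the host-side workload (spdk_nvme_perf earlier, the fio SPDK plugin below) only needs the transport/address/service-id triple (tcp, 10.0.0.2, 4420) to reach the namespace.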
00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:59.720 Cannot find device "nvmf_tgt_br" 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.720 Cannot find device "nvmf_tgt_br2" 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:59.720 Cannot find device "nvmf_tgt_br" 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:59.720 Cannot find device "nvmf_tgt_br2" 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:59.720 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:59.978 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:59.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:16:59.979 00:16:59.979 --- 10.0.0.2 ping statistics --- 00:16:59.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.979 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:59.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:59.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:16:59.979 00:16:59.979 --- 10.0.0.3 ping statistics --- 00:16:59.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.979 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:59.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:59.979 00:16:59.979 --- 10.0.0.1 ping statistics --- 00:16:59.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.979 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87475 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87475 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87475 ']' 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:59.979 14:33:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.979 [2024-07-15 14:33:39.544565] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:16:59.979 [2024-07-15 14:33:39.544683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.237 [2024-07-15 14:33:39.689831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.237 [2024-07-15 14:33:39.749991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:00.237 [2024-07-15 14:33:39.750046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.237 [2024-07-15 14:33:39.750057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.237 [2024-07-15 14:33:39.750066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.237 [2024-07-15 14:33:39.750073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.237 [2024-07-15 14:33:39.750383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.237 [2024-07-15 14:33:39.750915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.237 [2024-07-15 14:33:39.750965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.237 [2024-07-15 14:33:39.751086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.172 14:33:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.172 14:33:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:17:01.172 14:33:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:01.436 [2024-07-15 14:33:40.864944] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.436 14:33:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:01.436 14:33:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:01.436 14:33:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.436 14:33:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:01.694 Malloc1 00:17:01.951 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:02.209 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:02.467 14:33:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:02.725 [2024-07-15 14:33:42.075379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:02.725 14:33:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:02.982 14:33:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:02.982 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:02.982 fio-3.35 00:17:02.982 Starting 1 thread 00:17:05.508 00:17:05.508 test: (groupid=0, jobs=1): err= 0: pid=87605: Mon Jul 15 14:33:44 2024 00:17:05.508 read: IOPS=9041, BW=35.3MiB/s (37.0MB/s)(70.9MiB/2007msec) 00:17:05.508 slat (usec): min=2, max=385, avg= 2.78, stdev= 3.67 00:17:05.508 clat (usec): min=3522, max=14844, avg=7408.38, stdev=774.26 00:17:05.508 lat (usec): min=3572, max=14849, avg=7411.16, stdev=774.28 00:17:05.508 clat percentiles (usec): 00:17:05.508 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 6783], 20.00th=[ 6980], 00:17:05.508 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7439], 00:17:05.508 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291], 00:17:05.508 | 99.00th=[10945], 99.50th=[11994], 99.90th=[13566], 99.95th=[14615], 00:17:05.508 | 99.99th=[14877] 00:17:05.508 bw ( KiB/s): min=35512, max=36704, per=99.95%, avg=36148.00, stdev=489.70, samples=4 00:17:05.508 iops : min= 8878, max= 9176, avg=9037.00, stdev=122.43, samples=4 00:17:05.508 write: IOPS=9054, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2007msec); 0 zone resets 00:17:05.508 slat (usec): min=2, max=284, avg= 2.90, stdev= 2.41 00:17:05.508 clat (usec): min=2566, max=12308, avg=6667.70, stdev=572.52 00:17:05.508 lat (usec): 
min=2580, max=12310, avg=6670.61, stdev=572.47 00:17:05.508 clat percentiles (usec): 00:17:05.508 | 1.00th=[ 5211], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6325], 00:17:05.508 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[ 6783], 00:17:05.508 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7439], 00:17:05.508 | 99.00th=[ 8455], 99.50th=[ 9503], 99.90th=[10945], 99.95th=[11469], 00:17:05.508 | 99.99th=[12256] 00:17:05.508 bw ( KiB/s): min=36056, max=36560, per=100.00%, avg=36236.00, stdev=222.52, samples=4 00:17:05.508 iops : min= 9014, max= 9140, avg=9059.00, stdev=55.63, samples=4 00:17:05.508 lat (msec) : 4=0.07%, 10=99.01%, 20=0.92% 00:17:05.508 cpu : usr=62.01%, sys=26.07%, ctx=15, majf=0, minf=7 00:17:05.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:05.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.508 issued rwts: total=18146,18173,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.508 00:17:05.508 Run status group 0 (all jobs): 00:17:05.508 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.9MiB (74.3MB), run=2007-2007msec 00:17:05.508 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2007-2007msec 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:05.508 14:33:44 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:05.508 14:33:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:05.508 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:05.508 fio-3.35 00:17:05.508 Starting 1 thread 00:17:08.036 00:17:08.036 test: (groupid=0, jobs=1): err= 0: pid=87648: Mon Jul 15 14:33:47 2024 00:17:08.036 read: IOPS=7323, BW=114MiB/s (120MB/s)(230MiB/2007msec) 00:17:08.036 slat (usec): min=3, max=124, avg= 4.43, stdev= 3.19 00:17:08.036 clat (usec): min=3177, max=30090, avg=10300.17, stdev=3552.78 00:17:08.036 lat (usec): min=3192, max=30106, avg=10304.60, stdev=3554.62 00:17:08.036 clat percentiles (usec): 00:17:08.036 | 1.00th=[ 5276], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7701], 00:17:08.036 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[10552], 00:17:08.036 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13042], 95.00th=[15139], 00:17:08.036 | 99.00th=[26346], 99.50th=[28181], 99.90th=[29492], 99.95th=[29754], 00:17:08.036 | 99.99th=[30016] 00:17:08.036 bw ( KiB/s): min=51200, max=66048, per=50.83%, avg=59568.00, stdev=6164.44, samples=4 00:17:08.036 iops : min= 3200, max= 4128, avg=3723.00, stdev=385.28, samples=4 00:17:08.036 write: IOPS=4213, BW=65.8MiB/s (69.0MB/s)(122MiB/1851msec); 0 zone resets 00:17:08.036 slat (usec): min=37, max=530, avg=40.80, stdev= 8.56 00:17:08.036 clat (usec): min=3249, max=37532, avg=12625.61, stdev=3761.24 00:17:08.036 lat (usec): min=3287, max=37593, avg=12666.40, stdev=3765.26 00:17:08.036 clat percentiles (usec): 00:17:08.036 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:17:08.036 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:17:08.036 | 70.00th=[13042], 80.00th=[14091], 90.00th=[15926], 95.00th=[20579], 00:17:08.036 | 99.00th=[27919], 99.50th=[30540], 99.90th=[31327], 99.95th=[31327], 00:17:08.036 | 99.99th=[37487] 00:17:08.036 bw ( KiB/s): min=53248, max=69088, per=91.78%, avg=61872.00, stdev=6524.47, samples=4 00:17:08.036 iops : min= 3328, max= 4318, avg=3867.00, stdev=407.78, samples=4 00:17:08.036 lat (msec) : 4=0.12%, 10=39.10%, 20=56.76%, 50=4.02% 00:17:08.036 cpu : usr=73.08%, sys=17.00%, ctx=5, majf=0, minf=10 00:17:08.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:08.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.036 issued rwts: total=14699,7799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.036 00:17:08.036 Run status group 0 (all jobs): 00:17:08.036 READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=230MiB (241MB), run=2007-2007msec 00:17:08.036 WRITE: bw=65.8MiB/s (69.0MB/s), 65.8MiB/s-65.8MiB/s 
(69.0MB/s-69.0MB/s), io=122MiB (128MB), run=1851-1851msec 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.036 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.036 rmmod nvme_tcp 00:17:08.294 rmmod nvme_fabrics 00:17:08.294 rmmod nvme_keyring 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87475 ']' 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87475 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87475 ']' 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87475 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87475 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.294 killing process with pid 87475 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87475' 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87475 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87475 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.294 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.553 14:33:47 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:08.553 00:17:08.553 real 0m8.846s 00:17:08.553 user 0m36.734s 00:17:08.553 sys 0m2.205s 00:17:08.553 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:08.553 ************************************ 00:17:08.553 END TEST nvmf_fio_host 00:17:08.553 ************************************ 00:17:08.553 14:33:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.553 14:33:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:08.553 14:33:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:08.553 14:33:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:08.553 14:33:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:08.553 14:33:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.553 ************************************ 00:17:08.553 START TEST nvmf_failover 00:17:08.553 ************************************ 00:17:08.553 14:33:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:08.553 * Looking for test storage... 00:17:08.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.553 14:33:48 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 
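The two fio runs above (the 4 KiB randrw job against example_config.fio and the 16 KiB mock SGL job) both go through the fio_plugin helper whose xtrace is shown: it probes the SPDK ioengine with ldd for a sanitizer runtime, preloads whatever it finds together with the plugin, and addresses the NVMe/TCP namespace purely through the --filename string. A condensed sketch of that invocation, using the paths from this run:

  #!/usr/bin/env bash
  # Sketch of the fio_plugin helper traced above; paths are the ones used
  # in this run, adjust for a different checkout or fio install.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  fio_bin=/usr/src/fio/fio
  job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  # If the plugin was built with ASAN, the sanitizer runtime must be
  # preloaded ahead of the ioengine (the trace does this via ldd | grep | awk).
  asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt.asan/ {print $3; exit}')

  # ioengine=spdk is set in the job file; the NVMe/TCP target is selected
  # entirely by the filename string below.
  LD_PRELOAD="$asan_lib $plugin" "$fio_bin" "$job" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096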
00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:08.554 Cannot find device "nvmf_tgt_br" 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.554 Cannot find device "nvmf_tgt_br2" 00:17:08.554 14:33:48 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:08.554 Cannot find device "nvmf_tgt_br" 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:08.554 Cannot find device "nvmf_tgt_br2" 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:08.554 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:08.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:08.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:08.813 14:33:48 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:08.813 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:08.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:08.813 00:17:08.813 --- 10.0.0.2 ping statistics --- 00:17:08.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.813 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:08.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:08.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:08.814 00:17:08.814 --- 10.0.0.3 ping statistics --- 00:17:08.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.814 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:08.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:08.814 00:17:08.814 --- 10.0.0.1 ping statistics --- 00:17:08.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.814 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87864 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87864 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87864 ']' 
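nvmf_veth_init above builds the NET_TYPE=virt topology the failover test runs on: the target sits in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (and 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, and the veth peers are joined by the nvmf_br bridge. Condensed from the commands in the trace (a sketch; the second target interface nvmf_tgt_if2 / 10.0.0.3 is created the same way as nvmf_tgt_if):

  # Namespace and veth pairs, names as used in this run.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1 in the root namespace, target 10.0.0.2
  # inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and let NVMe/TCP (port 4420) through.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity-check both directions, as the pings above do.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1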
00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.814 14:33:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:09.073 [2024-07-15 14:33:48.477025] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:09.073 [2024-07-15 14:33:48.477143] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.073 [2024-07-15 14:33:48.621130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.332 [2024-07-15 14:33:48.680462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.332 [2024-07-15 14:33:48.680523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.332 [2024-07-15 14:33:48.680535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.332 [2024-07-15 14:33:48.680543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.332 [2024-07-15 14:33:48.680550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
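nvmfappstart launches the target inside that namespace with all tracepoint groups enabled (-e 0xFFFF) and a 0xE core mask, which is why the NOTICE lines above point at spdk_trace and /dev/shm/nvmf_trace.0. A sketch of the equivalent manual launch and of capturing the trace the notice refers to (assumes spdk_trace is on PATH, e.g. from build/bin):

  # Start the NVMe-oF target in the test namespace: shm id 0, every
  # tracepoint group, reactors on cores 1-3 (mask 0xE), as in the trace.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # While it runs (or after), take the snapshot the NOTICE suggests, or
  # keep the shared-memory file for offline analysis.
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  cp /dev/shm/nvmf_trace.0 .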
00:17:09.332 [2024-07-15 14:33:48.681221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.332 [2024-07-15 14:33:48.681345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.332 [2024-07-15 14:33:48.681351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.898 14:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.155 [2024-07-15 14:33:49.642081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.155 14:33:49 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:10.413 Malloc0 00:17:10.671 14:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.930 14:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.188 14:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.445 [2024-07-15 14:33:50.786950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.446 14:33:50 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:11.703 [2024-07-15 14:33:51.059160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:11.703 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:11.961 [2024-07-15 14:33:51.315364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87976 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87976 /var/tmp/bdevperf.sock 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87976 ']' 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.961 14:33:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:12.894 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.894 14:33:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:12.894 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:13.156 NVMe0n1 00:17:13.156 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:13.414 00:17:13.414 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88024 00:17:13.414 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.414 14:33:52 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:14.811 14:33:53 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.811 [2024-07-15 14:33:54.208207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.208674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.208811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.208902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.208998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 
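The burst of tqpair recv-state messages that starts here (and continues below) is triggered by pulling the 4420 listener out from under the connected controller. In outline, the RPC sequence the failover test drove to reach this point, condensed from the rpc.py calls traced above (waitforlisten and the bdevperf.py perform_tests plumbing omitted):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target side: TCP transport, a 64 MiB malloc-backed namespace, and
  # listeners on three ports so there is somewhere to fail over to.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s "$port"
  done

  # Initiator side: bdevperf with two paths to the same controller.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Drop the first path while I/O is in flight; traffic should move to 4421.
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420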
00:17:14.811 [2024-07-15 14:33:54.209409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.209966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210128] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.811 [2024-07-15 14:33:54.210747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.210837] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.210913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.210988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is 
same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.211935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.212979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213882] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.213947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.214027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.214107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.214183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.214250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set 00:17:14.812 [2024-07-15 14:33:54.214313] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set
00:17:14.812 [2024-07-15 14:33:54.214391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824f80 is same with the state(5) to be set
00:17:14.812 14:33:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:17:18.095 14:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:18.095
00:17:18.095 14:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:18.662 [2024-07-15 14:33:57.975125 .. 14:33:57.976391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826340 is same with the state(5) to be set (same message repeated for this tqpair throughout this interval)
00:17:18.663 14:33:57 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:17:21.946 14:34:01 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:21.946 [2024-07-15 14:34:01.267460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:21.946 14:34:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:17:22.879 14:34:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:23.138 [2024-07-15 14:34:02.661691 .. 14:34:02.662070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x826a20 is same with the state(5) to be set (same message repeated for this tqpair throughout this interval)
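The path swap above is driven entirely from host/failover.sh through scripts/rpc.py: a second path to the same subsystem is attached on port 4422 through the bdevperf RPC socket, then listeners on the target are removed and re-added so bdev_nvme is forced to fail over. A minimal sketch of that sequence, using only the commands visible in this log and assuming the target already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 and a bdevperf instance is listening on /var/tmp/bdevperf.sock (the sleeps mirror the test, not a requirement):

  # Sketch of the failover-trigger sequence seen above; paths, ports and the
  # controller name NVMe0 are taken from the log.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Register a second (failover) path to the same subsystem on port 4422.
  "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"

  # Drop the listener the host is connected to at this point (4421),
  # forcing bdev_nvme to fail over to a remaining path.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  sleep 3

  # Restore the original listener on 4420, then remove the temporary 4422 path.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422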
00:17:23.138 14:34:02 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 88024
00:17:29.704 0
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87976
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87976 ']'
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87976
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87976
00:17:29.704 killing process with pid 87976
14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87976'
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87976
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87976
00:17:29.704 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:29.704 [2024-07-15 14:33:51.392120] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
00:17:29.704 [2024-07-15 14:33:51.392245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87976 ]
00:17:29.704 [2024-07-15 14:33:51.529035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:29.704 [2024-07-15 14:33:51.599558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.704 Running I/O for 15 seconds...
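The try.txt dump that follows is the bdevperf side of the same run: the READ and WRITE commands below were in flight on the active path when its listener went away, so they complete with ABORTED - SQ DELETION while bdev_nvme tears the qpair down. The bdevperf command line itself is not part of this excerpt; a representative invocation, assuming the standard bdevperf options and the socket path used by the rpc.py calls above, might look like:

  # Sketch only (not the literal command from this run): drive a queued verify
  # workload against the attached NVMe bdev for 15 seconds, exposing the RPC
  # socket that the rpc.py calls above talk to.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15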
00:17:29.704 [2024-07-15 14:33:54.214870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.214922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.214954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.214971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.214988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215228] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215559] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.704 [2024-07-15 14:33:54.215603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.704 [2024-07-15 14:33:54.215898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.704 [2024-07-15 14:33:54.215912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.215928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.215942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.215958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.215972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.215988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 
14:33:54.216537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.705 [2024-07-15 14:33:54.216956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.216972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.216985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.217017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.217046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.217076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.217106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.217137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.705 [2024-07-15 14:33:54.217170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.705 [2024-07-15 14:33:54.217186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 
14:33:54.217819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.217967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.217982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.706 [2024-07-15 14:33:54.218532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.706 [2024-07-15 14:33:54.218547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78424 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.707 [2024-07-15 14:33:54.218944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.218959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddbc90 is same with the state(5) to be set 00:17:29.707 [2024-07-15 14:33:54.218977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.707 [2024-07-15 14:33:54.218988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.707 [2024-07-15 14:33:54.218999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78472 len:8 PRP1 0x0 PRP2 0x0 00:17:29.707 [2024-07-15 14:33:54.219013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:54.219068] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ddbc90 was disconnected and freed. reset controller. 
00:17:29.707 [2024-07-15 14:33:54.219086] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:17:29.707 [2024-07-15 14:33:54.219150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.707 [2024-07-15 14:33:54.219172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.707 [2024-07-15 14:33:54.219187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.707 [2024-07-15 14:33:54.219201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.707 [2024-07-15 14:33:54.219216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.707 [2024-07-15 14:33:54.219229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.707 [2024-07-15 14:33:54.219244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.707 [2024-07-15 14:33:54.219258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.707 [2024-07-15 14:33:54.219272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:29.707 [2024-07-15 14:33:54.223254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:29.707 [2024-07-15 14:33:54.223302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5fe30 (9): Bad file descriptor
00:17:29.707 [2024-07-15 14:33:54.258407] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
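The records above are one complete failover cycle: the in-flight WRITEs on qpair 0x1ddbc90 (lba 78264 through 78472 in this excerpt, len:8 blocks each) are completed with ABORTED - SQ DELETION (00/08), the qpair is disconnected and freed, bdev_nvme starts failover from 10.0.0.2:4420 to 10.0.0.2:4421, the four pending admin ASYNC EVENT REQUESTs are aborted, and the controller reset completes. A minimal sketch for condensing such a dump offline is shown below; it assumes nothing beyond the record format visible in this log, and the script and its regexes are illustrative, not part of the test suite.

#!/usr/bin/env python3
# Summarize SPDK nvme_qpair abort dumps from captured console text.
# A sketch, assuming only the nvme_io_qpair_print_command /
# spdk_nvme_print_completion / bdev_nvme_failover_trid record layout above.
import re
import sys
from collections import Counter

# e.g. "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78264 len:8"
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# e.g. "spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)"
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \((\d+)/(\d+)\)")
FAILOVER_RE = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: (Start failover from \S+ to \S+)")

def summarize(text: str) -> None:
    ops = Counter()   # opcode -> number of printed (aborted) commands
    lbas = []         # LBAs of those commands, to report the affected span
    for op, _sqid, _cid, _nsid, lba, _nblocks in CMD_RE.findall(text):
        ops[op] += 1
        lbas.append(int(lba))
    statuses = Counter(status for status, _sct, _sc in CPL_RE.findall(text))
    for event in FAILOVER_RE.findall(text):
        print(event)
    if lbas:
        print(f"printed commands: {dict(ops)}, lba {min(lbas)}..{max(lbas)}")
    print(f"completion statuses: {dict(statuses)}")

if __name__ == "__main__":
    summarize(sys.stdin.read())

Piping the console text for one cycle through a sketch like this would be expected to report the failover line plus per-opcode counts, the affected LBA span, and the completion-status histogram, which is usually enough to spot a status other than ABORTED - SQ DELETION hiding in the dump.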
00:17:29.707 [2024-07-15 14:33:57.977692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.977971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.977987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 
14:33:57.978088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.707 [2024-07-15 14:33:57.978426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.707 [2024-07-15 14:33:57.978442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.708 [2024-07-15 14:33:57.978641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:88992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:89024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:89040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:89048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.978978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.978992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89072 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 
[2024-07-15 14:33:57.979352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.708 [2024-07-15 14:33:57.979487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.708 [2024-07-15 14:33:57.979503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.709 [2024-07-15 14:33:57.979768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:89200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.979974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:89248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.979989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 
14:33:57.980630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.709 [2024-07-15 14:33:57.980834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.709 [2024-07-15 14:33:57.980850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.980864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.980880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.980894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.980910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.980925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.980942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.980956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.980972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.980994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:42 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.710 [2024-07-15 14:33:57.981272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89584 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89592 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89600 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89608 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89616 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89624 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89632 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89640 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89648 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981768] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89656 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89664 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.981959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89672 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.981972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.981986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.981996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.982018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89680 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.982032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.982046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.982056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.982067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89688 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.982080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.982094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.982106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.982117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89696 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.982131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.982145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.710 [2024-07-15 14:33:57.982156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.710 [2024-07-15 14:33:57.982166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:89704 len:8 PRP1 0x0 PRP2 0x0 00:17:29.710 [2024-07-15 14:33:57.982179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.710 [2024-07-15 14:33:57.982228] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dddd90 was disconnected and freed. reset controller. 
00:17:29.710 [2024-07-15 14:33:57.982246] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:17:29.710 [2024-07-15 14:33:57.982304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.710 [2024-07-15 14:33:57.982325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.710 [2024-07-15 14:33:57.982341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.711 [2024-07-15 14:33:57.982366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.711 [2024-07-15 14:33:57.982392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.711 [2024-07-15 14:33:57.982405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.711 [2024-07-15 14:33:57.982420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:29.711 [2024-07-15 14:33:57.982433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:29.711 [2024-07-15 14:33:57.982450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:29.711 [2024-07-15 14:33:57.982499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5fe30 (9): Bad file descriptor
00:17:29.711 [2024-07-15 14:33:57.986436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:29.711 [2024-07-15 14:33:58.023968] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
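The second cycle repeats the pattern against qpair 0x1dddd90, failing over from 10.0.0.2:4421 to 10.0.0.2:4422 after the queued READs (lba 88688 through 88968) and WRITEs (lba 88976 through 89704) are aborted. The per-record fields are self-consistent: len:8 logical blocks carried in a 0x1000-byte SGL payload implies 512-byte blocks, and consecutive sequential records advance the LBA by 8, so the number of stride-8 records needed to cover a contiguous span can be checked with a few lines. A small sketch follows; the helper name is illustrative.

# Sanity-check the abort-dump arithmetic: len:8 blocks per I/O with a
# 0x1000-byte transport payload implies 512-byte logical blocks, and
# sequential records advance the LBA by 8 per 4 KiB I/O.
PAYLOAD_BYTES = 0x1000   # "len:0x1000" in the SGL DATA BLOCK line
BLOCKS_PER_IO = 8        # "len:8" in the command line

block_size = PAYLOAD_BYTES // BLOCKS_PER_IO
assert block_size == 512

def ios_in_span(first_lba: int, last_lba: int, stride: int = BLOCKS_PER_IO) -> int:
    # Number of stride-aligned I/O records covering first_lba..last_lba inclusive.
    return (last_lba - first_lba) // stride + 1

# Endpoints taken from the WRITE portion of the dump above (lba 88976 .. 89704):
print(block_size, ios_in_span(88976, 89704))   # prints: 512 92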
00:17:29.711 [2024-07-15 14:34:02.662599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.662971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.662987] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.711 [2024-07-15 14:34:02.663841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.711 [2024-07-15 14:34:02.663857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.711 [2024-07-15 14:34:02.663871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.663886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.663900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.663916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27008 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.663930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.663946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.663960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.663975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.663989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 
[2024-07-15 14:34:02.664245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.712 [2024-07-15 14:34:02.664608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.664975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.712 [2024-07-15 14:34:02.664990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.712 [2024-07-15 14:34:02.665005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.713 [2024-07-15 14:34:02.665484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 
14:34:02.665518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27360 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665576] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27368 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27376 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27384 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27392 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27400 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665847] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27408 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27416 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.665953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27424 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.665966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.665981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.665991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27432 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27440 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27448 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27456 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666203] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27464 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27472 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.713 [2024-07-15 14:34:02.666321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27480 len:8 PRP1 0x0 PRP2 0x0 00:17:29.713 [2024-07-15 14:34:02.666334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.713 [2024-07-15 14:34:02.666349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.713 [2024-07-15 14:34:02.666359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27488 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27496 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 
14:34:02.666468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27504 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27512 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27520 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27528 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27536 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27544 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27552 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27560 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666866] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27568 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27576 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.666961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.666975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.666985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.666996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27584 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26872 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:26880 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26888 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26896 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26904 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26912 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26920 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 [2024-07-15 14:34:02.667359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:29.714 [2024-07-15 14:34:02.667383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:29.714 [2024-07-15 14:34:02.667393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26928 len:8 PRP1 0x0 PRP2 0x0 00:17:29.714 
[2024-07-15 14:34:02.667407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667455] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dddb80 was disconnected and freed. reset controller. 00:17:29.714 [2024-07-15 14:34:02.667472] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:29.714 [2024-07-15 14:34:02.667529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.714 [2024-07-15 14:34:02.667551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.714 [2024-07-15 14:34:02.667590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.714 [2024-07-15 14:34:02.667618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.714 [2024-07-15 14:34:02.667646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.714 [2024-07-15 14:34:02.667660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:29.714 [2024-07-15 14:34:02.671602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:29.714 [2024-07-15 14:34:02.671642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5fe30 (9): Bad file descriptor 00:17:29.714 [2024-07-15 14:34:02.707828] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
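The burst of "ABORTED - SQ DELETION" notices above is what the test later counts to confirm that each forced failover really reset the controller (the grep at host/failover.sh@65 below). As a rough, unofficial sketch — assuming the bdevperf console output has been captured to a file such as try.txt, the capture path the test itself uses — the same flood can be summarized per opcode like this:

    # Count aborted commands per opcode, plus total aborts and successful resets.
    # try.txt is the capture file used by host/failover.sh; substitute any saved log.
    grep -o 'NOTICE\*: \(READ\|WRITE\) sqid:[0-9]*' try.txt | awk '{print $2}' | sort | uniq -c
    grep -c 'ABORTED - SQ DELETION' try.txt
    grep -c 'Resetting controller successful' try.txt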
00:17:29.714 
00:17:29.714 Latency(us)
00:17:29.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:29.714 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:29.714 Verification LBA range: start 0x0 length 0x4000
00:17:29.715 NVMe0n1 : 15.01 8763.25 34.23 216.20 0.00 14220.77 633.02 19660.80
00:17:29.715 ===================================================================================================================
00:17:29.715 Total : 8763.25 34.23 216.20 0.00 14220.77 633.02 19660.80
00:17:29.715 Received shutdown signal, test time was about 15.000000 seconds
00:17:29.715 
00:17:29.715 Latency(us)
00:17:29.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:29.715 ===================================================================================================================
00:17:29.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:17:29.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88227
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88227 /var/tmp/bdevperf.sock
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88227 ']'
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
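For context, the second bdevperf instance above is started idle and driven entirely over its private RPC socket. A hand-run approximation of that flow, pieced together only from the command lines visible in this trace (the paths are the CI paths; adjust them for a local checkout), would look like:

    # Start bdevperf idle (-z) with its own RPC socket, as host/failover.sh@72 does above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &

    # Attach the NVMe-oF/TCP controller through that socket (same arguments as the trace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Kick off the configured workload once the bdev is ready.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The detach and attach calls against ports 4420-4422 that follow in the trace are what force the failovers bdevperf then reports as "Resetting controller successful".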
00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:29.715 [2024-07-15 14:34:08.963633] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:29.715 14:34:08 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:29.715 [2024-07-15 14:34:09.239913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:29.715 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:29.973 NVMe0n1 00:17:30.231 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:30.489 00:17:30.489 14:34:09 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:30.747 00:17:30.747 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:30.747 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:31.005 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:31.571 14:34:10 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:34.870 14:34:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:17:34.870 14:34:13 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:34.870 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88356 00:17:34.870 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:34.870 14:34:14 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88356 00:17:35.805 0 00:17:35.805 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:35.805 [2024-07-15 14:34:08.375740] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:35.805 [2024-07-15 14:34:08.375848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88227 ] 00:17:35.805 [2024-07-15 14:34:08.515960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.805 [2024-07-15 14:34:08.574818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.805 [2024-07-15 14:34:10.850937] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:35.805 [2024-07-15 14:34:10.851049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.805 [2024-07-15 14:34:10.851075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.805 [2024-07-15 14:34:10.851094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.805 [2024-07-15 14:34:10.851107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.805 [2024-07-15 14:34:10.851121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.805 [2024-07-15 14:34:10.851134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.805 [2024-07-15 14:34:10.851148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:35.805 [2024-07-15 14:34:10.851161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.805 [2024-07-15 14:34:10.851175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:35.805 [2024-07-15 14:34:10.851216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:35.805 [2024-07-15 14:34:10.851246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846e30 (9): Bad file descriptor 00:17:35.805 [2024-07-15 14:34:10.861661] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:35.805 Running I/O for 1 seconds... 
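That 'Resetting controller successful' notice is what the failover test ultimately keys on: the grep -c / count=3 check at host/failover.sh@65-67 earlier in this log is essentially the following assertion against try.txt (a simplified sketch of the check, not the literal script):

  # One reset notice is logged per successful path switch; the earlier phase expects three.
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1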
00:17:35.805
00:17:35.805 Latency(us)
00:17:35.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:35.805 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:35.805 Verification LBA range: start 0x0 length 0x4000
00:17:35.805 NVMe0n1 : 1.01 8900.59 34.77 0.00 0.00 14299.88 2040.55 16205.27
00:17:35.805 ===================================================================================================================
00:17:35.805 Total : 8900.59 34.77 0.00 0.00 14299.88 2040.55 16205.27
00:17:35.805 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:35.805 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:36.063 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:36.320 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:36.320 14:34:15 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:36.577 14:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:36.835 14:34:16 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88227 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88227 ']' 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88227 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88227 00:17:40.172 killing process with pid 88227 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88227' 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88227 00:17:40.172 14:34:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88227 00:17:40.464 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:40.464 14:34:19 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:40.722 14:34:20 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:40.722 rmmod nvme_tcp 00:17:40.722 rmmod nvme_fabrics 00:17:40.722 rmmod nvme_keyring 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87864 ']' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87864 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87864 ']' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87864 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87864 00:17:40.722 killing process with pid 87864 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87864' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87864 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87864 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.722 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.980 14:34:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:40.980 00:17:40.980 real 0m32.393s 00:17:40.980 user 2m6.831s 00:17:40.980 sys 0m4.490s 00:17:40.980 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.980 ************************************ 00:17:40.980 END TEST nvmf_failover 00:17:40.980 14:34:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:40.980 ************************************ 00:17:40.980 14:34:20 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:40.980 14:34:20 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:40.980 14:34:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:40.980 14:34:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.980 14:34:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.980 ************************************ 00:17:40.980 START TEST nvmf_host_discovery 00:17:40.980 ************************************ 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:40.980 * Looking for test storage... 00:17:40.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:40.980 Cannot find device "nvmf_tgt_br" 00:17:40.980 
14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:40.980 Cannot find device "nvmf_tgt_br2" 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:40.980 Cannot find device "nvmf_tgt_br" 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:40.980 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:41.237 Cannot find device "nvmf_tgt_br2" 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:41.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:41.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:41.237 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:41.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:17:41.495 00:17:41.495 --- 10.0.0.2 ping statistics --- 00:17:41.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.495 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:41.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:41.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:41.495 00:17:41.495 --- 10.0.0.3 ping statistics --- 00:17:41.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.495 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:41.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:41.495 00:17:41.495 --- 10.0.0.1 ping statistics --- 00:17:41.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.495 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88653 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88653 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88653 ']' 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.495 14:34:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.495 [2024-07-15 14:34:20.938521] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:41.495 [2024-07-15 14:34:20.938609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.495 [2024-07-15 14:34:21.078030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.753 [2024-07-15 14:34:21.149880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
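The nvmf_tgt above runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init wired up a few entries earlier. Stripped of the cleanup, link-up and second-interface steps (nvmf_tgt_if2 / 10.0.0.3 is created the same way), that topology amounts to roughly the following, with every name and address taken from the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                             # bridge ties the two veth peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # Connectivity check in both directions, matching the pings above, before the target starts.
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1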
00:17:41.753 [2024-07-15 14:34:21.150163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.753 [2024-07-15 14:34:21.150265] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.753 [2024-07-15 14:34:21.150359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.753 [2024-07-15 14:34:21.150435] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.753 [2024-07-15 14:34:21.150549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 [2024-07-15 14:34:21.281664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 [2024-07-15 14:34:21.293809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 null0 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 null1 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:41.753 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88691 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88691 /tmp/host.sock 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88691 ']' 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:41.754 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.754 14:34:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.012 [2024-07-15 14:34:21.385475] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:17:42.012 [2024-07-15 14:34:21.385573] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88691 ] 00:17:42.012 [2024-07-15 14:34:21.520934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.012 [2024-07-15 14:34:21.590579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:42.948 14:34:22 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:42.948 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:42.949 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 [2024-07-15 14:34:22.750185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:43.208 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:43.467 14:34:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:43.468 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.468 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:17:43.468 14:34:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:44.046 [2024-07-15 14:34:23.397892] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:44.046 [2024-07-15 14:34:23.398082] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:44.046 [2024-07-15 14:34:23.398119] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:44.046 [2024-07-15 14:34:23.484021] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:44.046 [2024-07-15 14:34:23.541029] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:17:44.046 [2024-07-15 14:34:23.541064] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:44.613 14:34:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.613 14:34:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:44.613 14:34:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:44.613 14:34:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:44.613 14:34:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:44.613 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:44.613 14:34:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:44.613 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.614 14:34:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.614 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:44.873 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.874 [2024-07-15 14:34:24.358942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:44.874 [2024-07-15 14:34:24.360170] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:44.874 [2024-07-15 14:34:24.360210] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:44.874 [2024-07-15 14:34:24.446232] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:44.874 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.133 [2024-07-15 14:34:24.511615] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:45.133 [2024-07-15 14:34:24.511648] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:45.133 [2024-07-15 14:34:24.511655] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:45.133 14:34:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.066 [2024-07-15 14:34:25.652627] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:46.066 [2024-07-15 14:34:25.652669] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.066 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:46.067 [2024-07-15 14:34:25.658615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.067 [2024-07-15 14:34:25.658644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.067 [2024-07-15 14:34:25.658657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.067 [2024-07-15 14:34:25.658667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.067 [2024-07-15 14:34:25.658677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.067 [2024-07-15 14:34:25.658686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.067 [2024-07-15 14:34:25.658708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:46.067 [2024-07-15 14:34:25.658719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:46.067 [2024-07-15 14:34:25.658729] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.067 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.330 [2024-07-15 14:34:25.668567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.678594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.330 [2024-07-15 14:34:25.678744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.330 [2024-07-15 14:34:25.678768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c50 with addr=10.0.0.2, port=4420 00:17:46.330 [2024-07-15 14:34:25.678781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.330 [2024-07-15 14:34:25.678800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.678843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:46.330 [2024-07-15 14:34:25.678857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:46.330 [2024-07-15 14:34:25.678868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:46.330 [2024-07-15 14:34:25.678884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.330 [2024-07-15 14:34:25.688662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.330 [2024-07-15 14:34:25.688762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.330 [2024-07-15 14:34:25.688784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c50 with addr=10.0.0.2, port=4420 00:17:46.330 [2024-07-15 14:34:25.688795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.330 [2024-07-15 14:34:25.688812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.688845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:46.330 [2024-07-15 14:34:25.688856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:46.330 [2024-07-15 14:34:25.688866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:46.330 [2024-07-15 14:34:25.688881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:46.330 [2024-07-15 14:34:25.698748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.330 [2024-07-15 14:34:25.698851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.330 [2024-07-15 14:34:25.698874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c50 with addr=10.0.0.2, port=4420 00:17:46.330 [2024-07-15 14:34:25.698892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.330 [2024-07-15 14:34:25.698909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.698956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:46.330 [2024-07-15 14:34:25.698969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:46.330 [2024-07-15 14:34:25.698979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:46.330 [2024-07-15 14:34:25.698995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:46.330 [2024-07-15 14:34:25.708810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.330 [2024-07-15 14:34:25.708904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.330 [2024-07-15 14:34:25.708926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c50 with addr=10.0.0.2, port=4420 00:17:46.330 [2024-07-15 14:34:25.708937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.330 [2024-07-15 14:34:25.708954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.708982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:46.330 [2024-07-15 14:34:25.708993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:46.330 [2024-07-15 14:34:25.709003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:46.330 [2024-07-15 14:34:25.709018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:46.330 [2024-07-15 14:34:25.718869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.330 [2024-07-15 14:34:25.718951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.330 [2024-07-15 14:34:25.718972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c50 with addr=10.0.0.2, port=4420 00:17:46.330 [2024-07-15 14:34:25.718983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.330 [2024-07-15 14:34:25.718999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.719024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:46.330 [2024-07-15 14:34:25.719035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:46.330 [2024-07-15 14:34:25.719044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:46.330 [2024-07-15 14:34:25.719059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
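A minimal sketch of the waitforcondition polling helper whose autotest_common.sh@912-918 frames appear throughout the trace above. This is reconstructed from the xtrace output as an illustration, not quoted from SPDK's autotest_common.sh; the function name and the cond/max/eval/sleep sequence come from the trace, the rest is an approximation.

    # Re-evaluate an arbitrary bash condition once per second, up to ~10 attempts.
    # Mirrors the local cond / local max=10 / (( max-- )) / eval / sleep 1 frames
    # visible in the xtrace.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

    # Example, matching one of the checks traced above:
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'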
00:17:46.330 [2024-07-15 14:34:25.728920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:46.330 [2024-07-15 14:34:25.729014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:46.330 [2024-07-15 14:34:25.729036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b30c50 with addr=10.0.0.2, port=4420 00:17:46.330 [2024-07-15 14:34:25.729047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30c50 is same with the state(5) to be set 00:17:46.330 [2024-07-15 14:34:25.729064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b30c50 (9): Bad file descriptor 00:17:46.330 [2024-07-15 14:34:25.729079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:46.330 [2024-07-15 14:34:25.729088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:46.330 [2024-07-15 14:34:25.729098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:46.330 [2024-07-15 14:34:25.729113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:46.330 [2024-07-15 14:34:25.738844] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:46.330 [2024-07-15 14:34:25.738875] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:46.330 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:46.331 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:46.589 14:34:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.589 14:34:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.523 [2024-07-15 14:34:27.088588] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:47.523 [2024-07-15 14:34:27.088619] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:47.523 [2024-07-15 14:34:27.088637] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:47.779 [2024-07-15 14:34:27.175736] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:47.779 [2024-07-15 14:34:27.236095] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:47.779 [2024-07-15 14:34:27.236166] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:47.779 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.779 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:47.779 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:47.779 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:47.780 14:34:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.780 2024/07/15 14:34:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:47.780 request: 00:17:47.780 { 00:17:47.780 "method": "bdev_nvme_start_discovery", 00:17:47.780 "params": { 00:17:47.780 "name": "nvme", 00:17:47.780 "trtype": "tcp", 00:17:47.780 "traddr": "10.0.0.2", 00:17:47.780 "adrfam": "ipv4", 00:17:47.780 "trsvcid": "8009", 00:17:47.780 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:47.780 "wait_for_attach": true 00:17:47.780 } 00:17:47.780 } 00:17:47.780 Got JSON-RPC error response 00:17:47.780 GoRPCClient: error on JSON-RPC call 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:47.780 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:48.038 14:34:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.038 2024/07/15 14:34:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:48.038 request: 00:17:48.038 { 00:17:48.038 "method": "bdev_nvme_start_discovery", 00:17:48.038 "params": { 00:17:48.038 "name": "nvme_second", 00:17:48.038 "trtype": "tcp", 00:17:48.038 "traddr": "10.0.0.2", 00:17:48.038 "adrfam": "ipv4", 00:17:48.038 "trsvcid": "8009", 00:17:48.038 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:48.038 "wait_for_attach": true 00:17:48.038 } 00:17:48.038 } 00:17:48.038 Got JSON-RPC error response 00:17:48.038 GoRPCClient: error on JSON-RPC call 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:48.038 14:34:27 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.038 14:34:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.971 [2024-07-15 14:34:28.516509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:48.971 [2024-07-15 14:34:28.516585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2cf00 with addr=10.0.0.2, port=8010 00:17:48.971 [2024-07-15 14:34:28.516607] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:48.971 [2024-07-15 14:34:28.516618] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:48.971 [2024-07-15 14:34:28.516627] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:50.342 [2024-07-15 14:34:29.516493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:50.342 [2024-07-15 14:34:29.516596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b2cf00 with addr=10.0.0.2, port=8010 00:17:50.342 [2024-07-15 14:34:29.516618] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:50.342 [2024-07-15 14:34:29.516628] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:50.342 [2024-07-15 14:34:29.516638] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 
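A note on the failed attempts above: the nvme_second discovery service is pointed at port 8010, where no discovery listener exists in this run, so each connect() fails with errno 111 and the attach gives up after the 3000 ms timeout requested by -T (surfaced as attach_timeout_ms in the JSON-RPC dump that follows, with error code -110, Connection timed out). Issued directly against the host application, the same call would look roughly like this, with the socket path, address, and flags taken from this run:

    # Start a named discovery service with a bounded attach timeout instead of -w.
    # 10.0.0.2:8010 has no discovery listener here, so the call is expected to
    # fail with Code=-110 (Connection timed out) after ~3 seconds.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000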
00:17:51.273 [2024-07-15 14:34:30.516351] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:51.273 2024/07/15 14:34:30 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:51.273 request: 00:17:51.273 { 00:17:51.273 "method": "bdev_nvme_start_discovery", 00:17:51.273 "params": { 00:17:51.273 "name": "nvme_second", 00:17:51.273 "trtype": "tcp", 00:17:51.273 "traddr": "10.0.0.2", 00:17:51.273 "adrfam": "ipv4", 00:17:51.273 "trsvcid": "8010", 00:17:51.273 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:51.273 "wait_for_attach": false, 00:17:51.273 "attach_timeout_ms": 3000 00:17:51.273 } 00:17:51.273 } 00:17:51.273 Got JSON-RPC error response 00:17:51.274 GoRPCClient: error on JSON-RPC call 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88691 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.274 rmmod nvme_tcp 00:17:51.274 rmmod nvme_fabrics 00:17:51.274 rmmod nvme_keyring 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@124 -- # set -e 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88653 ']' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88653 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88653 ']' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88653 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88653 00:17:51.274 killing process with pid 88653 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88653' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88653 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88653 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.274 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.571 14:34:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:51.571 00:17:51.571 real 0m10.475s 00:17:51.571 user 0m21.238s 00:17:51.571 sys 0m1.531s 00:17:51.571 ************************************ 00:17:51.571 END TEST nvmf_host_discovery 00:17:51.571 ************************************ 00:17:51.571 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.571 14:34:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:51.571 14:34:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:51.571 14:34:30 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:51.571 14:34:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:51.571 14:34:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.571 14:34:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.571 ************************************ 00:17:51.571 START TEST nvmf_host_multipath_status 00:17:51.571 ************************************ 00:17:51.571 14:34:30 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:51.571 * Looking for test storage... 00:17:51.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.571 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:51.572 Cannot find device "nvmf_tgt_br" 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:17:51.572 Cannot find device "nvmf_tgt_br2" 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:51.572 Cannot find device "nvmf_tgt_br" 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:51.572 Cannot find device "nvmf_tgt_br2" 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:51.572 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.867 14:34:31 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:51.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:51.867 00:17:51.867 --- 10.0.0.2 ping statistics --- 00:17:51.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.867 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:51.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:51.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:17:51.867 00:17:51.867 --- 10.0.0.3 ping statistics --- 00:17:51.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.867 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:51.867 00:17:51.867 --- 10.0.0.1 ping statistics --- 00:17:51.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.867 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89177 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89177 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89177 ']' 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.867 14:34:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:51.867 [2024-07-15 14:34:31.423810] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:17:51.867 [2024-07-15 14:34:31.423898] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.126 [2024-07-15 14:34:31.553454] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.126 [2024-07-15 14:34:31.612103] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.126 [2024-07-15 14:34:31.612301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.126 [2024-07-15 14:34:31.612487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.126 [2024-07-15 14:34:31.612636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.126 [2024-07-15 14:34:31.612674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.126 [2024-07-15 14:34:31.612897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.126 [2024-07-15 14:34:31.612904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89177 00:17:53.058 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:53.314 [2024-07-15 14:34:32.673112] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.314 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:53.572 Malloc0 00:17:53.572 14:34:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:53.831 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:53.831 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.089 [2024-07-15 14:34:33.683425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.347 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:17:54.347 [2024-07-15 14:34:33.927521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89279 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89279 /var/tmp/bdevperf.sock 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89279 ']' 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.605 14:34:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:55.538 14:34:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.538 14:34:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:55.538 14:34:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:55.817 14:34:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:56.074 Nvme0n1 00:17:56.075 14:34:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:56.639 Nvme0n1 00:17:56.639 14:34:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:56.639 14:34:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.645 14:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:58.645 14:34:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:58.903 14:34:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:17:58.903 14:34:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:00.273 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.531 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:00.531 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:00.531 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.531 14:34:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:00.789 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.789 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:00.789 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.789 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:01.047 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.047 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:01.047 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:01.047 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.305 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.305 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:01.305 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.305 14:34:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:01.563 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.563 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:01.563 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:01.820 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:02.079 14:34:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:03.037 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:03.037 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:03.037 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.037 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:03.295 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:03.295 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:03.295 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.295 14:34:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:03.553 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.553 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:03.553 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.553 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.119 14:34:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.119 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:04.377 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.377 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:04.377 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.377 14:34:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:04.653 14:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.653 14:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:04.653 14:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:04.911 14:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:05.169 14:34:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.544 14:34:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:06.802 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:06.802 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:06.802 14:34:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.802 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:07.121 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.121 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:07.121 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.121 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:07.379 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.379 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:07.379 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.379 14:34:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:07.638 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.638 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:07.638 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:07.638 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.896 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.896 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:07.896 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:08.155 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:08.413 14:34:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:09.348 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:09.348 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:09.348 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.348 14:34:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:09.916 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:10.175 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:10.175 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:10.175 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.175 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:10.433 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:10.433 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:10.433 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:10.433 14:34:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.690 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:10.690 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:10.690 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:10.690 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:10.947 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:10.947 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:10.947 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:11.206 14:34:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:11.464 14:34:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:12.838 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.095 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:13.095 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:13.095 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:13.095 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.352 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.352 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:13.352 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.352 14:34:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:13.609 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:13.609 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:13.609 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:13.609 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.868 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:13.868 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:13.868 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:13.868 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:14.184 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:14.184 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:14.184 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:14.441 14:34:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:14.699 14:34:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:15.636 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:15.636 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:15.636 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.636 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:15.895 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:15.895 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:15.895 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:15.895 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:16.461 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.461 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:16.461 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.461 14:34:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:16.461 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:16.461 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:16.461 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:16.461 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:17.028 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:17.028 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:17.028 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:17.028 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:17.311 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:17.311 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:17.311 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:17.311 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:17.600 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:17.600 14:34:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:17.881 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:17.881 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:17.881 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:18.139 14:34:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:19.514 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:19.514 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:19.514 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.514 14:34:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:19.514 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.514 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:19.514 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:19.514 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.772 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:19.772 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:19.772 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:19.772 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:20.031 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.031 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:20.031 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.031 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:20.289 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.289 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:20.289 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.289 14:34:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:20.547 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.547 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:20.547 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:20.547 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:20.805 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:20.805 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:20.805 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:21.064 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:21.322 14:35:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:22.697 14:35:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:22.697 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:22.697 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.697 14:35:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:22.697 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:22.697 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:22.697 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:22.697 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:22.955 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:22.955 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:22.955 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:22.955 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.212 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.212 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:23.212 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:23.212 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.474 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.474 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:23.474 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.474 14:35:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:23.731 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.731 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:23.731 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:23.731 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:23.987 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:23.987 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:23.987 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:24.244 14:35:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:24.501 14:35:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:25.874 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:26.131 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.131 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:26.131 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.131 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:26.388 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.388 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:26.388 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.388 14:35:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:26.646 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.646 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- 
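[Editor's sketch, not part of the captured log] The ANA flips driven from the target side go through set_ANA_state (host/multipath_status.sh@59-@60). The traced RPC pairs imply a helper along these lines; only the two calls visible in the log are reproduced, and the wrapper itself is an assumption.

set_ANA_state() {
    # $1/$2 = new ANA state for the 4420/4421 listeners of cnode1 on the target
    # (default RPC socket, i.e. the nvmf target, not the bdevperf socket).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

So the traced set_ANA_state non_optimized inaccessible at @133 leaves 4420 usable and makes 4421 inaccessible, which is exactly what the subsequent check_status true false true true true false verifies.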
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:26.646 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:26.646 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:26.904 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:26.904 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:26.904 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:26.904 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:27.161 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:27.161 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:27.161 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:27.418 14:35:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:27.676 14:35:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:28.610 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:28.610 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:28.610 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:28.610 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.867 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:28.867 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:28.867 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:28.867 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:29.125 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:29.125 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:29.125 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:29.125 
14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.418 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.418 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:29.418 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.418 14:35:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:29.674 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.674 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:29.674 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.674 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:29.931 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:29.931 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:29.931 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:29.931 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:30.189 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:30.189 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89279 00:18:30.189 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89279 ']' 00:18:30.189 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89279 00:18:30.459 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89279 00:18:30.460 killing process with pid 89279 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89279' 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89279 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89279 00:18:30.460 Connection closed with partial response: 00:18:30.460 00:18:30.460 00:18:30.460 14:35:09 
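[Editor's sketch, not part of the captured log] The teardown above runs killprocess 89279 from common/autotest_common.sh. Below is a reduced reconstruction of the branch actually exercised here (Linux host, non-sudo process named reactor_2, i.e. bdevperf); branches that do not appear in this log are omitted, so treat this as an approximation rather than the real helper.

killprocess() {
    local pid=$1 process_name
    [[ -n "$pid" ]] || return 1                     # @948: refuse an empty pid
    if kill -0 "$pid"; then                         # @952: only act if the process is still alive
        [[ "$(uname)" == Linux ]] \
            && process_name=$(ps --no-headers -o comm= "$pid")   # @953-@954
        if [[ "$process_name" != sudo ]]; then      # @958: plain kill for a non-sudo process
            echo "killing process with pid $pid"    # @966
            kill "$pid"                             # @967
            wait "$pid"                             # @972: reap it before the test script's own wait at @139
        fi
    fi
}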
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89279 00:18:30.460 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:30.460 [2024-07-15 14:34:33.990346] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:30.460 [2024-07-15 14:34:33.990453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89279 ] 00:18:30.460 [2024-07-15 14:34:34.125184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.460 [2024-07-15 14:34:34.194362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.460 Running I/O for 90 seconds... 00:18:30.460 [2024-07-15 14:34:50.751725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.751821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.751891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.751912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.751936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.751951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.751973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.751987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.752467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.752481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753235] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.460 [2024-07-15 14:34:50.753942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.460 [2024-07-15 14:34:50.753974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.753989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754417] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.754975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.754989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:96 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.755966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.755984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.461 [2024-07-15 14:34:50.756011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.461 [2024-07-15 14:34:50.756025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.756971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.756998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757258] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.462 [2024-07-15 14:34:50.757783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.462 [2024-07-15 14:34:50.757808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.757823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.757849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.757864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.757890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.757917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.757944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.757959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.757985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.758000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.758026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.758044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.758072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.758087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.758126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.758143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.758170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.758184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:34:50.758211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:34:50.758226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.090427] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.090441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 
p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.463 [2024-07-15 14:35:07.093836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.463 [2024-07-15 14:35:07.093871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.093965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.093980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.094001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.094015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.094036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.094059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.094082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.094097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.094118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.463 [2024-07-15 14:35:07.094132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.463 [2024-07-15 14:35:07.094166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.094181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.094202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.094217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.094243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.464 [2024-07-15 14:35:07.094257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.094278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.464 [2024-07-15 14:35:07.094293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.464 [2024-07-15 14:35:07.095478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.095984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.095999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.096019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.464 [2024-07-15 14:35:07.096033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.096055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.464 [2024-07-15 14:35:07.096069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.096090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.464 [2024-07-15 14:35:07.096104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.096125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.464 [2024-07-15 14:35:07.096139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.096160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.464 [2024-07-15 14:35:07.096174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.464 [2024-07-15 14:35:07.096202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.096253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.096500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.096536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:18:30.465 [2024-07-15 14:35:07.096557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.096571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.096606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.096627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.096648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.097216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.097258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.097294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.097330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.097708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.097890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.097912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.465 [2024-07-15 14:35:07.097927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.465 [2024-07-15 14:35:07.099349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.465 [2024-07-15 14:35:07.099419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.465 [2024-07-15 14:35:07.099440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.099594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.099726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.099764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.099799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.099856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.099871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 
dnr:0 00:18:30.466 [2024-07-15 14:35:07.101950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.101964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.101985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.101999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.102305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.466 [2024-07-15 14:35:07.102340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.466 [2024-07-15 14:35:07.102483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.466 [2024-07-15 14:35:07.102504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.102518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.102539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.102561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.102583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.102597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.102618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.102633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.102654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.102668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.102689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.102715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.104786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.467 [2024-07-15 14:35:07.104871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.104906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.104941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.104976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.104997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.105011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.105584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.105607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.105621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.106237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.106280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.106315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.467 [2024-07-15 14:35:07.106363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.106401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.106436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.467 [2024-07-15 14:35:07.106471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.467 [2024-07-15 14:35:07.106492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:18:30.468 [2024-07-15 14:35:07.106561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.106575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.106732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.106768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.106918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.106953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.106974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.106988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.107611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.107647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.107710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.107967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.107988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.108003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.109458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:30.468 [2024-07-15 14:35:07.109500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.109536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.109585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.468 [2024-07-15 14:35:07.109620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.109655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.468 [2024-07-15 14:35:07.109676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.468 [2024-07-15 14:35:07.109690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.109741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.109776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.109813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.109848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.109883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.109918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.109954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.109975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.109989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.110010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.110032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.110055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.110069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.110090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.110104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.110126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.110151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.110174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.110189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.110210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.110224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:18:30.469 [2024-07-15 14:35:07.112597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.112682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.112734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.112770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.112805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.112876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.112911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.112965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.112986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.113000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.113036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.469 [2024-07-15 14:35:07.113071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.113107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.113143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.113819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.469 [2024-07-15 14:35:07.113862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.469 [2024-07-15 14:35:07.113883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.113898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.113919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.113933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.113955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.113969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.113990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.470 [2024-07-15 14:35:07.114349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.114815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.114858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.114978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.114999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.115013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.115048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.115084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.115189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.115977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.115998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.116013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.116048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.116083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.116118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.116153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.116188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.116223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.116258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.116293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.470 [2024-07-15 14:35:07.116335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.470 [2024-07-15 14:35:07.116358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.470 [2024-07-15 14:35:07.116373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 
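The repeated "(03/02)" in the notices above is the NVMe completion status split into status code type 0x3 (path-related) and status code 0x02, which SPDK prints as ASYMMETRIC ACCESS INACCESSIBLE: the I/O reached the controller over a path whose ANA group is currently inaccessible. A minimal C sketch of that decoding follows; the struct and helper names are illustrative only and are not taken from the SPDK headers.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: mirrors the sct/sc split that appears as "(03/02)"
 * in the notices above; the field and type names are hypothetical. */
struct cpl_status {
    uint8_t sct; /* status code type: 0x3 = path-related */
    uint8_t sc;  /* status code: 0x02 = asymmetric access inaccessible */
};

static bool is_ana_inaccessible(struct cpl_status s)
{
    return s.sct == 0x3 && s.sc == 0x02;
}

int main(void)
{
    struct cpl_status s = { .sct = 0x3, .sc = 0x02 }; /* the "(03/02)" pair */
    printf("ANA inaccessible: %s\n", is_ana_inaccessible(s) ? "yes" : "no");
    return 0;
}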
00:18:30.470 [2024-07-15 14:35:07.116394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.116408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.116429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.116443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.116464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.116478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.116499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.116513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.116534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.116548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.116569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.116583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.120977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.120992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.121063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.121098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.121134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.121204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.121625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.471 [2024-07-15 14:35:07.121639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.471 [2024-07-15 14:35:07.122503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.471 [2024-07-15 14:35:07.122524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.122975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.122989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.123024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.123059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.123094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.123131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:18:30.472 [2024-07-15 14:35:07.123689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.123728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.123784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.123820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.123855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.123890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.123925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.123960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.123981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.123995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.124030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.124065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.124100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.472 [2024-07-15 14:35:07.124318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.472 [2024-07-15 14:35:07.124423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.472 [2024-07-15 14:35:07.124444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.124458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.124478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.124492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.124513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.124528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.124549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.124564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.473 [2024-07-15 14:35:07.125675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.125904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.125925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.125939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.126651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.126686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.473 [2024-07-15 14:35:07.126740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.126961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.126991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.127006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:18:30.473 [2024-07-15 14:35:07.128946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.128960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.128980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.129001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.473 [2024-07-15 14:35:07.129021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.473 [2024-07-15 14:35:07.129036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.129512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.129639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.129653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.131924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.131964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.131994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.474 [2024-07-15 14:35:07.132308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.474 [2024-07-15 14:35:07.132413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.474 [2024-07-15 14:35:07.132847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.474 [2024-07-15 14:35:07.132861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.132882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.132897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.132918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.132932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.132953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.132968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.132989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.133003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.133038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.133073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.133109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.133153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.133189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.133224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.133260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.133897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.133939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.133961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.133975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
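Note on the repeated notices in this stretch of the log: they come from the SPDK NVMe driver's per-command printers (nvme_io_qpair_print_command and spdk_nvme_print_completion). Every queued READ/WRITE on qid:1 is being completed with the status string ASYMMETRIC ACCESS INACCESSIBLE, printed with the pair "(03/02)", which corresponds to the NVMe path-related status code type 0x3 with status code 0x02 (Asymmetric Namespace Access Inaccessible); dnr:0 indicates the do-not-retry bit is clear. The short Python sketch below is only an illustration of how one of these completion lines can be decoded when reading such a log; it is not part of the test run or of SPDK, and the field order it assumes (qid, cid, cdw0, sqhd, p, m, dnr) is taken from the text of the log itself.

#!/usr/bin/env python3
# Illustrative helper for reading the completion NOTICE lines in this log.
# Not part of the SPDK test run; field layout assumed from the log text.
import re

# Path Related Status codes (status code type 0x3). 0x02 is spelled the way
# the log above prints it; the others follow NVMe base specification naming.
PATH_RELATED_SC = {
    0x00: "INTERNAL PATH ERROR",
    0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
    0x03: "ASYMMETRIC ACCESS TRANSITION",
}

# Matches the "(sct/sc) qid:.. cid:.. cdw0:.. sqhd:.. p:. m:. dnr:." tail of
# a completion line as it appears above.
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)\s+"
    r"qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)\s+cdw0:(?P<cdw0>[0-9a-f]+)\s+"
    r"sqhd:(?P<sqhd>[0-9a-f]+)\s+p:(?P<p>\d)\s+m:(?P<m>\d)\s+dnr:(?P<dnr>\d)",
    re.IGNORECASE,
)

def decode(line: str):
    """Return (qid, cid, status-name) for one completion line, or None."""
    m = COMPLETION_RE.search(line)
    if not m:
        return None
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    if sct == 0x3:  # Path Related Status group
        name = PATH_RELATED_SC.get(sc, f"path-related sc=0x{sc:02x}")
    else:
        name = f"sct=0x{sct:x} sc=0x{sc:02x}"
    return int(m["qid"]), int(m["cid"]), name

if __name__ == "__main__":
    sample = ("ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 "
              "cdw0:0 sqhd:0023 p:0 m:0 dnr:0")
    print(decode(sample))  # -> (1, 79, 'ASYMMETRIC ACCESS INACCESSIBLE')

In this run the same decode applies to every completion that follows below: all of them report the ANA Inaccessible state on queue pair 1, which is the condition this test stage is exercising.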
00:18:30.475 [2024-07-15 14:35:07.133996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.134356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.134963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.134993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.475 [2024-07-15 14:35:07.135555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:30.475 [2024-07-15 14:35:07.135577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.475 [2024-07-15 14:35:07.135591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.135612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.135626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.135647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.135668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.135691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.135720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.136211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.136256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.136292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.136327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.136363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.136398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.136419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.136434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:39 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.139427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.139462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.139498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139554] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.476 [2024-07-15 14:35:07.139808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.139844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.139879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.139900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 
p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.140473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.140509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.140538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.140554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.140576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.140591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.140612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.140626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.140647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.476 [2024-07-15 14:35:07.140661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:30.476 [2024-07-15 14:35:07.140682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.140940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.140976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.140997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.141289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.141304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:30.477 [2024-07-15 14:35:07.143551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.143830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.143971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.143992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.144006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.144027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.144041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.144062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.144077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.144106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.144121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.144142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.477 [2024-07-15 14:35:07.144156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.144177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.477 [2024-07-15 14:35:07.144191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:30.477 [2024-07-15 14:35:07.144212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.144226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.144262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.144297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.144332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.144368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.144403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.144439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.144474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.144509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.144531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.144552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.146414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.146479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:18:30.478 [2024-07-15 14:35:07.146501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.146516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.146551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.146973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.146994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.147009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.147044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:30.478 [2024-07-15 14:35:07.147079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.147115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.147151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.147186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.147222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:30.478 [2024-07-15 14:35:07.147243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:30.478 [2024-07-15 14:35:07.147257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:30.478 Received shutdown signal, test time was about 33.722045 seconds 00:18:30.478 00:18:30.478 Latency(us) 00:18:30.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.478 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.478 Verification LBA range: start 0x0 length 0x4000 00:18:30.478 Nvme0n1 : 33.72 8321.74 32.51 0.00 0.00 15349.68 1154.33 4026531.84 00:18:30.478 =================================================================================================================== 00:18:30.478 Total : 8321.74 32.51 0.00 0.00 15349.68 1154.33 4026531.84 00:18:30.478 14:35:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:30.736 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:30.994 rmmod nvme_tcp 00:18:30.994 rmmod nvme_fabrics 00:18:30.994 rmmod nvme_keyring 00:18:30.994 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:30.994 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:30.994 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:30.994 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89177 ']' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89177 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89177 ']' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89177 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89177 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:18:30.995 killing process with pid 89177 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89177' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89177 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89177 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.995 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.254 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:31.254 00:18:31.254 real 0m39.693s 00:18:31.254 user 2m10.619s 00:18:31.254 sys 0m9.245s 00:18:31.254 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.254 14:35:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:31.254 ************************************ 00:18:31.254 END TEST nvmf_host_multipath_status 00:18:31.254 ************************************ 00:18:31.254 14:35:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:31.254 14:35:10 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:31.254 14:35:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.254 14:35:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.254 14:35:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:31.254 ************************************ 00:18:31.254 START TEST nvmf_discovery_remove_ifc 00:18:31.254 ************************************ 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:31.254 * Looking for test storage... 
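At this point nvmf_host_multipath_status is done: the verify workload (queue depth 128, 4 KiB I/Os) ran for about 33.7 seconds and still averaged roughly 8321.74 IOPS / 32.51 MiB/s on Nvme0n1 even while completions were being failed with ASYMMETRIC ACCESS INACCESSIBLE, and the script then tore the target down before discovery_remove_ifc starts below. The following is a minimal, hand-run sketch of that teardown using only commands that appear in the trace; the SPDK checkout path and the target pid (89177) are taken from this particular run and would differ elsewhere.

    #!/usr/bin/env bash
    # Sketch of the teardown traced above (multipath_status.sh + nvmftestfini).
    # SPDK_DIR and TARGET_PID are assumptions copied from this specific run.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    TARGET_PID=89177

    # Drop the subsystem the I/O job was attached to.
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel initiator modules, as nvmfcleanup does.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the SPDK target process and wait for it to exit (killprocess).
    kill "$TARGET_PID"
    while kill -0 "$TARGET_PID" 2>/dev/null; do sleep 0.5; done

The discovery_remove_ifc test that begins here builds its own target and virtual network from scratch, so nothing from the previous run is reused.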
00:18:31.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.254 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:31.254 Cannot find device "nvmf_tgt_br" 00:18:31.255 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:31.255 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:31.255 Cannot find device "nvmf_tgt_br2" 00:18:31.255 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:31.255 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:31.255 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:31.513 Cannot find device "nvmf_tgt_br" 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:31.513 Cannot find device "nvmf_tgt_br2" 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:31.513 14:35:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:31.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:18:31.513 00:18:31.513 --- 10.0.0.2 ping statistics --- 00:18:31.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.513 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:31.513 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.513 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:18:31.513 00:18:31.513 --- 10.0.0.3 ping statistics --- 00:18:31.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.513 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:31.513 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:18:31.513 00:18:31.513 --- 10.0.0.1 ping statistics --- 00:18:31.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.513 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90582 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90582 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90582 ']' 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.772 14:35:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:31.772 [2024-07-15 14:35:11.196596] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
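Condensed from the nvmftestinit / nvmf_veth_init trace above: the test builds a veth topology with the target end of each pair moved into a private network namespace and the bridge ends joined on the host, then launches the SPDK target inside that namespace. A minimal sketch of the equivalent commands, using the interface names, addresses and core mask printed in the log (the real helpers in nvmf/common.sh also create the second target interface nvmf_tgt_if2/10.0.0.3, add a FORWARD rule and handle cleanup, all omitted here):

  # namespace plus veth pairs: the *_if ends carry the IPs, the *_br ends join the bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the target app then runs entirely inside the namespace, reachable at 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The three pings in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that this topology forwards traffic in both directions before the target is exercised.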
00:18:31.772 [2024-07-15 14:35:11.196687] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.772 [2024-07-15 14:35:11.328805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.030 [2024-07-15 14:35:11.406126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.030 [2024-07-15 14:35:11.406195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.030 [2024-07-15 14:35:11.406208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.030 [2024-07-15 14:35:11.406216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.030 [2024-07-15 14:35:11.406223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.030 [2024-07-15 14:35:11.406250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.596 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.854 [2024-07-15 14:35:12.208621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.854 [2024-07-15 14:35:12.216769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:32.854 null0 00:18:32.854 [2024-07-15 14:35:12.248688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90632 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90632 /tmp/host.sock 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90632 ']' 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:32.854 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.854 Waiting for process to start up and listen on UNIX 
domain socket /tmp/host.sock... 00:18:32.855 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:32.855 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.855 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:32.855 [2024-07-15 14:35:12.324326] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:18:32.855 [2024-07-15 14:35:12.324447] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90632 ] 00:18:33.112 [2024-07-15 14:35:12.456624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.112 [2024-07-15 14:35:12.526748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.112 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.112 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.113 14:35:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:34.487 [2024-07-15 14:35:13.662100] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:34.487 [2024-07-15 14:35:13.662177] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:34.487 [2024-07-15 14:35:13.662237] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:34.487 [2024-07-15 14:35:13.748303] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:34.487 
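The initiator in this test is not the kernel host stack but a second SPDK app (pid 90632 above) driven over its own RPC socket. The rpc_cmd calls traced at host/discovery_remove_ifc.sh@65, @66 and @69 are roughly equivalent to the following, assuming scripts/rpc.py from the SPDK repo (rpc_cmd is the autotest wrapper around it):

  # host/initiator app on core 0, controlled via /tmp/host.sock, with bdev_nvme debug logging enabled
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  # once discovery attaches the data subsystem, the namespace shows up as bdev nvme0n1
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

The deliberately short --ctrlr-loss-timeout-sec / --reconnect-delay-sec / --fast-io-fail-timeout-sec values are what keep the interface-removal phase below down to a few seconds of polling.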
[2024-07-15 14:35:13.805788] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:34.487 [2024-07-15 14:35:13.805884] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:34.487 [2024-07-15 14:35:13.805918] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:34.487 [2024-07-15 14:35:13.805939] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:34.487 [2024-07-15 14:35:13.805970] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.487 [2024-07-15 14:35:13.810304] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1938650 was disconnected and freed. delete nvme_qpair. 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.487 14:35:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:34.487 14:35:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:35.445 14:35:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:36.817 14:35:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:36.817 14:35:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.817 14:35:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:36.817 14:35:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:37.749 14:35:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:38.681 14:35:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:38.681 14:35:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:39.613 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.871 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:39.871 14:35:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:39.871 [2024-07-15 14:35:19.233582] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:39.871 [2024-07-15 14:35:19.233675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.871 [2024-07-15 14:35:19.233691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.871 [2024-07-15 14:35:19.233705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.871 [2024-07-15 14:35:19.233743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.871 [2024-07-15 14:35:19.233755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.871 [2024-07-15 14:35:19.233765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.871 [2024-07-15 14:35:19.233775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.871 [2024-07-15 14:35:19.233784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.871 [2024-07-15 14:35:19.233795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:39.871 [2024-07-15 14:35:19.233804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:39.871 [2024-07-15 14:35:19.233814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901900 is same with the state(5) to be set 00:18:39.871 [2024-07-15 14:35:19.243573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1901900 (9): Bad file descriptor 00:18:39.871 [2024-07-15 14:35:19.253621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:40.826 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:40.826 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:40.827 [2024-07-15 14:35:20.305837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:40.827 [2024-07-15 14:35:20.306328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1901900 with addr=10.0.0.2, port=4420 00:18:40.827 [2024-07-15 14:35:20.306595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1901900 is same with the state(5) to be set 00:18:40.827 [2024-07-15 14:35:20.306675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1901900 (9): Bad file descriptor 00:18:40.827 [2024-07-15 14:35:20.307596] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:40.827 [2024-07-15 14:35:20.307677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:40.827 [2024-07-15 14:35:20.307726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:40.827 [2024-07-15 14:35:20.307750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:40.827 [2024-07-15 14:35:20.307817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
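The host/discovery_remove_ifc.sh@29, @33 and @34 lines that repeat above are the test's polling helpers. Reconstructed from the trace (the canonical definitions live in the test script itself and may differ in details such as quoting or an overall timeout), they amount to:

  get_bdev_list() {
      # names of all bdevs the host app currently exposes, as one sorted, space-separated line
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll once per second until the bdev list matches the expected value,
      # e.g. "nvme0n1" right after attach or "" once the path has been torn down
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

In the failure window above the list is still "nvme0n1", so the loop keeps sleeping while bdev_nvme retries the connection.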
00:18:40.827 [2024-07-15 14:35:20.307844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:40.827 14:35:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:41.765 [2024-07-15 14:35:21.307908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:41.765 [2024-07-15 14:35:21.307990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:41.765 [2024-07-15 14:35:21.308019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:41.765 [2024-07-15 14:35:21.308029] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:41.765 [2024-07-15 14:35:21.308053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:41.765 [2024-07-15 14:35:21.308083] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:41.765 [2024-07-15 14:35:21.308149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.765 [2024-07-15 14:35:21.308166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.765 [2024-07-15 14:35:21.308180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.765 [2024-07-15 14:35:21.308190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.765 [2024-07-15 14:35:21.308200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.765 [2024-07-15 14:35:21.308210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.765 [2024-07-15 14:35:21.308220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.765 [2024-07-15 14:35:21.308229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.765 [2024-07-15 14:35:21.308240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.765 [2024-07-15 14:35:21.308249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.765 [2024-07-15 14:35:21.308259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
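By this point the admin queue's outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands have been aborted, every reconnect attempt is timing out against the downed interface, and the discovery poller has dropped its entry for nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420. The test itself only watches the bdev list, but when debugging a run like this it can also help to watch controller state from the host app; bdev_nvme_get_controllers is a standard SPDK RPC, though it is not invoked anywhere in this trace:

  # not part of the test script -- just a manual way to observe the failing path
  watch -n 1 'scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .'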
00:18:41.765 [2024-07-15 14:35:21.308829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a43e0 (9): Bad file descriptor 00:18:41.765 [2024-07-15 14:35:21.309839] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:41.765 [2024-07-15 14:35:21.309861] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:41.765 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:42.022 14:35:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.955 14:35:22 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:42.955 14:35:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:43.888 [2024-07-15 14:35:23.313577] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:43.888 [2024-07-15 14:35:23.313606] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:43.888 [2024-07-15 14:35:23.313625] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:43.888 [2024-07-15 14:35:23.401743] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:43.888 [2024-07-15 14:35:23.464835] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:43.888 [2024-07-15 14:35:23.464895] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:43.888 [2024-07-15 14:35:23.464920] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:43.888 [2024-07-15 14:35:23.464937] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:43.888 [2024-07-15 14:35:23.464947] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:43.888 [2024-07-15 14:35:23.472096] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x191d300 was disconnected and freed. delete nvme_qpair. 
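Taken together, the fault-injection cycle this test just completed is short enough to restate in a few lines (the commands are the ones traced at host/discovery_remove_ifc.sh@75/@76, @79, @82/@83 and @86 above; wait_for_bdev is the polling helper sketched earlier):

  # 1. take the target-side address and link away underneath an attached controller
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''           # nvme0n1 must be deleted once the ctrlr-loss timeout expires

  # 2. give the path back and expect the still-running discovery service to re-attach
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1      # a fresh controller, hence nvme1n1 rather than nvme0n1

The nvme1 attach and the freeing of qpair 0x191d300 in the entries just above are the successful end of step 2; everything that follows is teardown (killprocess, nvmftestfini, module unload) before the next test, nvmf_identify_kernel_target, repeats the same nvmftestinit environment setup.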
00:18:44.146 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:44.146 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:44.146 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.146 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.146 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:44.146 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90632 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90632 ']' 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90632 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90632 00:18:44.147 killing process with pid 90632 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90632' 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90632 00:18:44.147 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90632 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.405 rmmod nvme_tcp 00:18:44.405 rmmod nvme_fabrics 00:18:44.405 rmmod nvme_keyring 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:44.405 14:35:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90582 ']' 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90582 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90582 ']' 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90582 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90582 00:18:44.405 killing process with pid 90582 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90582' 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90582 00:18:44.405 14:35:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90582 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:44.663 00:18:44.663 real 0m13.427s 00:18:44.663 user 0m23.845s 00:18:44.663 sys 0m1.504s 00:18:44.663 ************************************ 00:18:44.663 END TEST nvmf_discovery_remove_ifc 00:18:44.663 ************************************ 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.663 14:35:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:44.663 14:35:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.663 14:35:24 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:44.663 14:35:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.663 14:35:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.663 14:35:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.663 ************************************ 00:18:44.664 START TEST nvmf_identify_kernel_target 00:18:44.664 ************************************ 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:44.664 * Looking for test storage... 00:18:44.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:44.664 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:44.922 Cannot find device "nvmf_tgt_br" 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.922 Cannot find device "nvmf_tgt_br2" 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:44.922 Cannot find device "nvmf_tgt_br" 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:44.922 Cannot find device "nvmf_tgt_br2" 00:18:44.922 14:35:24 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.922 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:44.923 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:45.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:18:45.181 00:18:45.181 --- 10.0.0.2 ping statistics --- 00:18:45.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.181 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:45.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:45.181 00:18:45.181 --- 10.0.0.3 ping statistics --- 00:18:45.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.181 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:45.181 00:18:45.181 --- 10.0.0.1 ping statistics --- 00:18:45.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.181 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:45.181 14:35:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:45.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:45.439 Waiting for block devices as requested 00:18:45.697 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:45.697 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:45.697 No valid GPT data, bailing 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:45.697 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:45.956 No valid GPT data, bailing 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:45.956 No valid GPT data, bailing 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:45.956 No valid GPT data, bailing 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
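The loop traced above is how nvmf/common.sh selects a backing device for the kernel target: it walks /sys/block/nvme*, skips zoned namespaces, and treats a namespace as free when spdk-gpt.py and blkid report no partition table ("No valid GPT data, bailing"), which is why /dev/nvme1n1 ends up chosen here. The mkdir and echo entries that follow then expose that device as a kernel NVMe/TCP subsystem listening on 10.0.0.1:4420. A condensed shell sketch of both steps is given below; the selection logic is simplified, and the configfs attribute file names (device_path, addr_traddr, and so on) are inferred from the standard kernel nvmet layout, since xtrace does not show redirection targets.

  # Pick the last NVMe namespace that is neither zoned nor already partitioned
  # (simplified: the real helpers also consult scripts/spdk-gpt.py).
  nvme=
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # skip zoned namespaces (queue/zoned reporting anything other than "none")
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # a device that already carries a partition table counts as "in use"
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
      nvme=/dev/$dev
  done

  # Expose $nvme through the kernel nvmet TCP target via configfs, mirroring
  # the mkdir/echo sequence in the trace (attribute file names inferred).
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  # newer kernels expose a writable model string; the trace writes this value
  [[ -e $subsys/attr_model ]] && echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1         > "$subsys/attr_allow_any_host"
  echo "$nvme"   > "$subsys/namespaces/1/device_path"
  echo 1         > "$subsys/namespaces/1/enable"
  echo 10.0.0.1  > "$port/addr_traddr"
  echo tcp       > "$port/addr_trtype"
  echo 4420      > "$port/addr_trsvcid"
  echo ipv4      > "$port/addr_adrfam"
  # linking the subsystem into the port is what starts the listener
  ln -s "$subsys" "$port/subsystems/"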
00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -a 10.0.0.1 -t tcp -s 4420 00:18:45.956 00:18:45.956 Discovery Log Number of Records 2, Generation counter 2 00:18:45.956 =====Discovery Log Entry 0====== 00:18:45.956 trtype: tcp 00:18:45.956 adrfam: ipv4 00:18:45.956 subtype: current discovery subsystem 00:18:45.956 treq: not specified, sq flow control disable supported 00:18:45.956 portid: 1 00:18:45.956 trsvcid: 4420 00:18:45.956 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:45.956 traddr: 10.0.0.1 00:18:45.956 eflags: none 00:18:45.956 sectype: none 00:18:45.956 =====Discovery Log Entry 1====== 00:18:45.956 trtype: tcp 00:18:45.956 adrfam: ipv4 00:18:45.956 subtype: nvme subsystem 00:18:45.956 treq: not specified, sq flow control disable supported 00:18:45.956 portid: 1 00:18:45.956 trsvcid: 4420 00:18:45.956 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:45.956 traddr: 10.0.0.1 00:18:45.956 eflags: none 00:18:45.956 sectype: none 00:18:45.956 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:45.956 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:46.214 ===================================================== 00:18:46.214 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:46.214 ===================================================== 00:18:46.214 Controller Capabilities/Features 00:18:46.214 ================================ 00:18:46.214 Vendor ID: 0000 00:18:46.214 Subsystem Vendor ID: 0000 00:18:46.214 Serial Number: 66db9371531b16e3a893 00:18:46.214 Model Number: Linux 00:18:46.214 Firmware Version: 6.7.0-68 00:18:46.215 Recommended Arb Burst: 0 00:18:46.215 IEEE OUI Identifier: 00 00 00 00:18:46.215 Multi-path I/O 00:18:46.215 May have multiple subsystem ports: No 00:18:46.215 May have multiple controllers: No 00:18:46.215 Associated with SR-IOV VF: No 00:18:46.215 Max Data Transfer Size: Unlimited 00:18:46.215 Max Number of Namespaces: 0 
00:18:46.215 Max Number of I/O Queues: 1024 00:18:46.215 NVMe Specification Version (VS): 1.3 00:18:46.215 NVMe Specification Version (Identify): 1.3 00:18:46.215 Maximum Queue Entries: 1024 00:18:46.215 Contiguous Queues Required: No 00:18:46.215 Arbitration Mechanisms Supported 00:18:46.215 Weighted Round Robin: Not Supported 00:18:46.215 Vendor Specific: Not Supported 00:18:46.215 Reset Timeout: 7500 ms 00:18:46.215 Doorbell Stride: 4 bytes 00:18:46.215 NVM Subsystem Reset: Not Supported 00:18:46.215 Command Sets Supported 00:18:46.215 NVM Command Set: Supported 00:18:46.215 Boot Partition: Not Supported 00:18:46.215 Memory Page Size Minimum: 4096 bytes 00:18:46.215 Memory Page Size Maximum: 4096 bytes 00:18:46.215 Persistent Memory Region: Not Supported 00:18:46.215 Optional Asynchronous Events Supported 00:18:46.215 Namespace Attribute Notices: Not Supported 00:18:46.215 Firmware Activation Notices: Not Supported 00:18:46.215 ANA Change Notices: Not Supported 00:18:46.215 PLE Aggregate Log Change Notices: Not Supported 00:18:46.215 LBA Status Info Alert Notices: Not Supported 00:18:46.215 EGE Aggregate Log Change Notices: Not Supported 00:18:46.215 Normal NVM Subsystem Shutdown event: Not Supported 00:18:46.215 Zone Descriptor Change Notices: Not Supported 00:18:46.215 Discovery Log Change Notices: Supported 00:18:46.215 Controller Attributes 00:18:46.215 128-bit Host Identifier: Not Supported 00:18:46.215 Non-Operational Permissive Mode: Not Supported 00:18:46.215 NVM Sets: Not Supported 00:18:46.215 Read Recovery Levels: Not Supported 00:18:46.215 Endurance Groups: Not Supported 00:18:46.215 Predictable Latency Mode: Not Supported 00:18:46.215 Traffic Based Keep ALive: Not Supported 00:18:46.215 Namespace Granularity: Not Supported 00:18:46.215 SQ Associations: Not Supported 00:18:46.215 UUID List: Not Supported 00:18:46.215 Multi-Domain Subsystem: Not Supported 00:18:46.215 Fixed Capacity Management: Not Supported 00:18:46.215 Variable Capacity Management: Not Supported 00:18:46.215 Delete Endurance Group: Not Supported 00:18:46.215 Delete NVM Set: Not Supported 00:18:46.215 Extended LBA Formats Supported: Not Supported 00:18:46.215 Flexible Data Placement Supported: Not Supported 00:18:46.215 00:18:46.215 Controller Memory Buffer Support 00:18:46.215 ================================ 00:18:46.215 Supported: No 00:18:46.215 00:18:46.215 Persistent Memory Region Support 00:18:46.215 ================================ 00:18:46.215 Supported: No 00:18:46.215 00:18:46.215 Admin Command Set Attributes 00:18:46.215 ============================ 00:18:46.215 Security Send/Receive: Not Supported 00:18:46.215 Format NVM: Not Supported 00:18:46.215 Firmware Activate/Download: Not Supported 00:18:46.215 Namespace Management: Not Supported 00:18:46.215 Device Self-Test: Not Supported 00:18:46.215 Directives: Not Supported 00:18:46.215 NVMe-MI: Not Supported 00:18:46.215 Virtualization Management: Not Supported 00:18:46.215 Doorbell Buffer Config: Not Supported 00:18:46.215 Get LBA Status Capability: Not Supported 00:18:46.215 Command & Feature Lockdown Capability: Not Supported 00:18:46.215 Abort Command Limit: 1 00:18:46.215 Async Event Request Limit: 1 00:18:46.215 Number of Firmware Slots: N/A 00:18:46.215 Firmware Slot 1 Read-Only: N/A 00:18:46.215 Firmware Activation Without Reset: N/A 00:18:46.215 Multiple Update Detection Support: N/A 00:18:46.215 Firmware Update Granularity: No Information Provided 00:18:46.215 Per-Namespace SMART Log: No 00:18:46.215 Asymmetric Namespace Access Log Page: 
Not Supported 00:18:46.215 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:46.215 Command Effects Log Page: Not Supported 00:18:46.215 Get Log Page Extended Data: Supported 00:18:46.215 Telemetry Log Pages: Not Supported 00:18:46.215 Persistent Event Log Pages: Not Supported 00:18:46.215 Supported Log Pages Log Page: May Support 00:18:46.215 Commands Supported & Effects Log Page: Not Supported 00:18:46.215 Feature Identifiers & Effects Log Page:May Support 00:18:46.215 NVMe-MI Commands & Effects Log Page: May Support 00:18:46.215 Data Area 4 for Telemetry Log: Not Supported 00:18:46.215 Error Log Page Entries Supported: 1 00:18:46.215 Keep Alive: Not Supported 00:18:46.215 00:18:46.215 NVM Command Set Attributes 00:18:46.215 ========================== 00:18:46.215 Submission Queue Entry Size 00:18:46.215 Max: 1 00:18:46.215 Min: 1 00:18:46.215 Completion Queue Entry Size 00:18:46.215 Max: 1 00:18:46.215 Min: 1 00:18:46.215 Number of Namespaces: 0 00:18:46.215 Compare Command: Not Supported 00:18:46.215 Write Uncorrectable Command: Not Supported 00:18:46.215 Dataset Management Command: Not Supported 00:18:46.215 Write Zeroes Command: Not Supported 00:18:46.215 Set Features Save Field: Not Supported 00:18:46.215 Reservations: Not Supported 00:18:46.215 Timestamp: Not Supported 00:18:46.215 Copy: Not Supported 00:18:46.215 Volatile Write Cache: Not Present 00:18:46.215 Atomic Write Unit (Normal): 1 00:18:46.215 Atomic Write Unit (PFail): 1 00:18:46.215 Atomic Compare & Write Unit: 1 00:18:46.215 Fused Compare & Write: Not Supported 00:18:46.215 Scatter-Gather List 00:18:46.215 SGL Command Set: Supported 00:18:46.215 SGL Keyed: Not Supported 00:18:46.215 SGL Bit Bucket Descriptor: Not Supported 00:18:46.215 SGL Metadata Pointer: Not Supported 00:18:46.215 Oversized SGL: Not Supported 00:18:46.215 SGL Metadata Address: Not Supported 00:18:46.215 SGL Offset: Supported 00:18:46.215 Transport SGL Data Block: Not Supported 00:18:46.215 Replay Protected Memory Block: Not Supported 00:18:46.215 00:18:46.215 Firmware Slot Information 00:18:46.215 ========================= 00:18:46.215 Active slot: 0 00:18:46.215 00:18:46.215 00:18:46.215 Error Log 00:18:46.215 ========= 00:18:46.215 00:18:46.215 Active Namespaces 00:18:46.215 ================= 00:18:46.215 Discovery Log Page 00:18:46.215 ================== 00:18:46.215 Generation Counter: 2 00:18:46.215 Number of Records: 2 00:18:46.215 Record Format: 0 00:18:46.215 00:18:46.215 Discovery Log Entry 0 00:18:46.215 ---------------------- 00:18:46.215 Transport Type: 3 (TCP) 00:18:46.215 Address Family: 1 (IPv4) 00:18:46.215 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:46.215 Entry Flags: 00:18:46.215 Duplicate Returned Information: 0 00:18:46.215 Explicit Persistent Connection Support for Discovery: 0 00:18:46.215 Transport Requirements: 00:18:46.215 Secure Channel: Not Specified 00:18:46.215 Port ID: 1 (0x0001) 00:18:46.215 Controller ID: 65535 (0xffff) 00:18:46.215 Admin Max SQ Size: 32 00:18:46.215 Transport Service Identifier: 4420 00:18:46.215 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:46.215 Transport Address: 10.0.0.1 00:18:46.215 Discovery Log Entry 1 00:18:46.215 ---------------------- 00:18:46.215 Transport Type: 3 (TCP) 00:18:46.215 Address Family: 1 (IPv4) 00:18:46.215 Subsystem Type: 2 (NVM Subsystem) 00:18:46.215 Entry Flags: 00:18:46.215 Duplicate Returned Information: 0 00:18:46.215 Explicit Persistent Connection Support for Discovery: 0 00:18:46.215 Transport Requirements: 00:18:46.215 
Secure Channel: Not Specified 00:18:46.215 Port ID: 1 (0x0001) 00:18:46.215 Controller ID: 65535 (0xffff) 00:18:46.215 Admin Max SQ Size: 32 00:18:46.215 Transport Service Identifier: 4420 00:18:46.215 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:46.215 Transport Address: 10.0.0.1 00:18:46.215 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:46.474 get_feature(0x01) failed 00:18:46.474 get_feature(0x02) failed 00:18:46.474 get_feature(0x04) failed 00:18:46.474 ===================================================== 00:18:46.474 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:46.474 ===================================================== 00:18:46.474 Controller Capabilities/Features 00:18:46.474 ================================ 00:18:46.474 Vendor ID: 0000 00:18:46.474 Subsystem Vendor ID: 0000 00:18:46.474 Serial Number: 07af9848bd7e62060029 00:18:46.474 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:46.474 Firmware Version: 6.7.0-68 00:18:46.474 Recommended Arb Burst: 6 00:18:46.474 IEEE OUI Identifier: 00 00 00 00:18:46.474 Multi-path I/O 00:18:46.474 May have multiple subsystem ports: Yes 00:18:46.474 May have multiple controllers: Yes 00:18:46.474 Associated with SR-IOV VF: No 00:18:46.474 Max Data Transfer Size: Unlimited 00:18:46.474 Max Number of Namespaces: 1024 00:18:46.474 Max Number of I/O Queues: 128 00:18:46.474 NVMe Specification Version (VS): 1.3 00:18:46.474 NVMe Specification Version (Identify): 1.3 00:18:46.474 Maximum Queue Entries: 1024 00:18:46.474 Contiguous Queues Required: No 00:18:46.474 Arbitration Mechanisms Supported 00:18:46.474 Weighted Round Robin: Not Supported 00:18:46.474 Vendor Specific: Not Supported 00:18:46.474 Reset Timeout: 7500 ms 00:18:46.474 Doorbell Stride: 4 bytes 00:18:46.474 NVM Subsystem Reset: Not Supported 00:18:46.474 Command Sets Supported 00:18:46.474 NVM Command Set: Supported 00:18:46.474 Boot Partition: Not Supported 00:18:46.474 Memory Page Size Minimum: 4096 bytes 00:18:46.474 Memory Page Size Maximum: 4096 bytes 00:18:46.474 Persistent Memory Region: Not Supported 00:18:46.474 Optional Asynchronous Events Supported 00:18:46.474 Namespace Attribute Notices: Supported 00:18:46.474 Firmware Activation Notices: Not Supported 00:18:46.474 ANA Change Notices: Supported 00:18:46.474 PLE Aggregate Log Change Notices: Not Supported 00:18:46.474 LBA Status Info Alert Notices: Not Supported 00:18:46.474 EGE Aggregate Log Change Notices: Not Supported 00:18:46.474 Normal NVM Subsystem Shutdown event: Not Supported 00:18:46.474 Zone Descriptor Change Notices: Not Supported 00:18:46.474 Discovery Log Change Notices: Not Supported 00:18:46.474 Controller Attributes 00:18:46.474 128-bit Host Identifier: Supported 00:18:46.474 Non-Operational Permissive Mode: Not Supported 00:18:46.474 NVM Sets: Not Supported 00:18:46.474 Read Recovery Levels: Not Supported 00:18:46.474 Endurance Groups: Not Supported 00:18:46.474 Predictable Latency Mode: Not Supported 00:18:46.474 Traffic Based Keep ALive: Supported 00:18:46.474 Namespace Granularity: Not Supported 00:18:46.474 SQ Associations: Not Supported 00:18:46.474 UUID List: Not Supported 00:18:46.474 Multi-Domain Subsystem: Not Supported 00:18:46.474 Fixed Capacity Management: Not Supported 00:18:46.474 Variable Capacity Management: Not Supported 00:18:46.474 
Delete Endurance Group: Not Supported 00:18:46.474 Delete NVM Set: Not Supported 00:18:46.474 Extended LBA Formats Supported: Not Supported 00:18:46.474 Flexible Data Placement Supported: Not Supported 00:18:46.474 00:18:46.474 Controller Memory Buffer Support 00:18:46.474 ================================ 00:18:46.474 Supported: No 00:18:46.474 00:18:46.474 Persistent Memory Region Support 00:18:46.474 ================================ 00:18:46.474 Supported: No 00:18:46.474 00:18:46.474 Admin Command Set Attributes 00:18:46.474 ============================ 00:18:46.474 Security Send/Receive: Not Supported 00:18:46.474 Format NVM: Not Supported 00:18:46.474 Firmware Activate/Download: Not Supported 00:18:46.474 Namespace Management: Not Supported 00:18:46.474 Device Self-Test: Not Supported 00:18:46.475 Directives: Not Supported 00:18:46.475 NVMe-MI: Not Supported 00:18:46.475 Virtualization Management: Not Supported 00:18:46.475 Doorbell Buffer Config: Not Supported 00:18:46.475 Get LBA Status Capability: Not Supported 00:18:46.475 Command & Feature Lockdown Capability: Not Supported 00:18:46.475 Abort Command Limit: 4 00:18:46.475 Async Event Request Limit: 4 00:18:46.475 Number of Firmware Slots: N/A 00:18:46.475 Firmware Slot 1 Read-Only: N/A 00:18:46.475 Firmware Activation Without Reset: N/A 00:18:46.475 Multiple Update Detection Support: N/A 00:18:46.475 Firmware Update Granularity: No Information Provided 00:18:46.475 Per-Namespace SMART Log: Yes 00:18:46.475 Asymmetric Namespace Access Log Page: Supported 00:18:46.475 ANA Transition Time : 10 sec 00:18:46.475 00:18:46.475 Asymmetric Namespace Access Capabilities 00:18:46.475 ANA Optimized State : Supported 00:18:46.475 ANA Non-Optimized State : Supported 00:18:46.475 ANA Inaccessible State : Supported 00:18:46.475 ANA Persistent Loss State : Supported 00:18:46.475 ANA Change State : Supported 00:18:46.475 ANAGRPID is not changed : No 00:18:46.475 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:46.475 00:18:46.475 ANA Group Identifier Maximum : 128 00:18:46.475 Number of ANA Group Identifiers : 128 00:18:46.475 Max Number of Allowed Namespaces : 1024 00:18:46.475 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:46.475 Command Effects Log Page: Supported 00:18:46.475 Get Log Page Extended Data: Supported 00:18:46.475 Telemetry Log Pages: Not Supported 00:18:46.475 Persistent Event Log Pages: Not Supported 00:18:46.475 Supported Log Pages Log Page: May Support 00:18:46.475 Commands Supported & Effects Log Page: Not Supported 00:18:46.475 Feature Identifiers & Effects Log Page:May Support 00:18:46.475 NVMe-MI Commands & Effects Log Page: May Support 00:18:46.475 Data Area 4 for Telemetry Log: Not Supported 00:18:46.475 Error Log Page Entries Supported: 128 00:18:46.475 Keep Alive: Supported 00:18:46.475 Keep Alive Granularity: 1000 ms 00:18:46.475 00:18:46.475 NVM Command Set Attributes 00:18:46.475 ========================== 00:18:46.475 Submission Queue Entry Size 00:18:46.475 Max: 64 00:18:46.475 Min: 64 00:18:46.475 Completion Queue Entry Size 00:18:46.475 Max: 16 00:18:46.475 Min: 16 00:18:46.475 Number of Namespaces: 1024 00:18:46.475 Compare Command: Not Supported 00:18:46.475 Write Uncorrectable Command: Not Supported 00:18:46.475 Dataset Management Command: Supported 00:18:46.475 Write Zeroes Command: Supported 00:18:46.475 Set Features Save Field: Not Supported 00:18:46.475 Reservations: Not Supported 00:18:46.475 Timestamp: Not Supported 00:18:46.475 Copy: Not Supported 00:18:46.475 Volatile Write Cache: Present 
00:18:46.475 Atomic Write Unit (Normal): 1 00:18:46.475 Atomic Write Unit (PFail): 1 00:18:46.475 Atomic Compare & Write Unit: 1 00:18:46.475 Fused Compare & Write: Not Supported 00:18:46.475 Scatter-Gather List 00:18:46.475 SGL Command Set: Supported 00:18:46.475 SGL Keyed: Not Supported 00:18:46.475 SGL Bit Bucket Descriptor: Not Supported 00:18:46.475 SGL Metadata Pointer: Not Supported 00:18:46.475 Oversized SGL: Not Supported 00:18:46.475 SGL Metadata Address: Not Supported 00:18:46.475 SGL Offset: Supported 00:18:46.475 Transport SGL Data Block: Not Supported 00:18:46.475 Replay Protected Memory Block: Not Supported 00:18:46.475 00:18:46.475 Firmware Slot Information 00:18:46.475 ========================= 00:18:46.475 Active slot: 0 00:18:46.475 00:18:46.475 Asymmetric Namespace Access 00:18:46.475 =========================== 00:18:46.475 Change Count : 0 00:18:46.475 Number of ANA Group Descriptors : 1 00:18:46.475 ANA Group Descriptor : 0 00:18:46.475 ANA Group ID : 1 00:18:46.475 Number of NSID Values : 1 00:18:46.475 Change Count : 0 00:18:46.475 ANA State : 1 00:18:46.475 Namespace Identifier : 1 00:18:46.475 00:18:46.475 Commands Supported and Effects 00:18:46.475 ============================== 00:18:46.475 Admin Commands 00:18:46.475 -------------- 00:18:46.475 Get Log Page (02h): Supported 00:18:46.475 Identify (06h): Supported 00:18:46.475 Abort (08h): Supported 00:18:46.475 Set Features (09h): Supported 00:18:46.475 Get Features (0Ah): Supported 00:18:46.475 Asynchronous Event Request (0Ch): Supported 00:18:46.475 Keep Alive (18h): Supported 00:18:46.475 I/O Commands 00:18:46.475 ------------ 00:18:46.475 Flush (00h): Supported 00:18:46.475 Write (01h): Supported LBA-Change 00:18:46.475 Read (02h): Supported 00:18:46.475 Write Zeroes (08h): Supported LBA-Change 00:18:46.475 Dataset Management (09h): Supported 00:18:46.475 00:18:46.475 Error Log 00:18:46.475 ========= 00:18:46.475 Entry: 0 00:18:46.475 Error Count: 0x3 00:18:46.475 Submission Queue Id: 0x0 00:18:46.475 Command Id: 0x5 00:18:46.475 Phase Bit: 0 00:18:46.475 Status Code: 0x2 00:18:46.475 Status Code Type: 0x0 00:18:46.475 Do Not Retry: 1 00:18:46.475 Error Location: 0x28 00:18:46.475 LBA: 0x0 00:18:46.475 Namespace: 0x0 00:18:46.475 Vendor Log Page: 0x0 00:18:46.475 ----------- 00:18:46.475 Entry: 1 00:18:46.475 Error Count: 0x2 00:18:46.475 Submission Queue Id: 0x0 00:18:46.475 Command Id: 0x5 00:18:46.475 Phase Bit: 0 00:18:46.475 Status Code: 0x2 00:18:46.475 Status Code Type: 0x0 00:18:46.475 Do Not Retry: 1 00:18:46.475 Error Location: 0x28 00:18:46.475 LBA: 0x0 00:18:46.475 Namespace: 0x0 00:18:46.475 Vendor Log Page: 0x0 00:18:46.475 ----------- 00:18:46.475 Entry: 2 00:18:46.475 Error Count: 0x1 00:18:46.475 Submission Queue Id: 0x0 00:18:46.475 Command Id: 0x4 00:18:46.475 Phase Bit: 0 00:18:46.475 Status Code: 0x2 00:18:46.475 Status Code Type: 0x0 00:18:46.475 Do Not Retry: 1 00:18:46.475 Error Location: 0x28 00:18:46.475 LBA: 0x0 00:18:46.475 Namespace: 0x0 00:18:46.475 Vendor Log Page: 0x0 00:18:46.475 00:18:46.475 Number of Queues 00:18:46.475 ================ 00:18:46.475 Number of I/O Submission Queues: 128 00:18:46.475 Number of I/O Completion Queues: 128 00:18:46.475 00:18:46.475 ZNS Specific Controller Data 00:18:46.475 ============================ 00:18:46.475 Zone Append Size Limit: 0 00:18:46.475 00:18:46.475 00:18:46.475 Active Namespaces 00:18:46.475 ================= 00:18:46.475 get_feature(0x05) failed 00:18:46.475 Namespace ID:1 00:18:46.475 Command Set Identifier: NVM (00h) 
00:18:46.475 Deallocate: Supported 00:18:46.475 Deallocated/Unwritten Error: Not Supported 00:18:46.475 Deallocated Read Value: Unknown 00:18:46.475 Deallocate in Write Zeroes: Not Supported 00:18:46.475 Deallocated Guard Field: 0xFFFF 00:18:46.475 Flush: Supported 00:18:46.475 Reservation: Not Supported 00:18:46.475 Namespace Sharing Capabilities: Multiple Controllers 00:18:46.475 Size (in LBAs): 1310720 (5GiB) 00:18:46.475 Capacity (in LBAs): 1310720 (5GiB) 00:18:46.475 Utilization (in LBAs): 1310720 (5GiB) 00:18:46.475 UUID: 8040a383-b071-4b51-be07-03306cf27253 00:18:46.475 Thin Provisioning: Not Supported 00:18:46.475 Per-NS Atomic Units: Yes 00:18:46.475 Atomic Boundary Size (Normal): 0 00:18:46.475 Atomic Boundary Size (PFail): 0 00:18:46.475 Atomic Boundary Offset: 0 00:18:46.475 NGUID/EUI64 Never Reused: No 00:18:46.475 ANA group ID: 1 00:18:46.475 Namespace Write Protected: No 00:18:46.475 Number of LBA Formats: 1 00:18:46.475 Current LBA Format: LBA Format #00 00:18:46.475 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:46.475 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.475 rmmod nvme_tcp 00:18:46.475 rmmod nvme_fabrics 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.475 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.476 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.476 14:35:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:46.476 
14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:46.476 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:47.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:47.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:47.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:47.410 ************************************ 00:18:47.410 END TEST nvmf_identify_kernel_target 00:18:47.410 ************************************ 00:18:47.410 00:18:47.410 real 0m2.743s 00:18:47.410 user 0m0.919s 00:18:47.410 sys 0m1.313s 00:18:47.410 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.410 14:35:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.410 14:35:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.410 14:35:26 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:47.410 14:35:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:47.410 14:35:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.410 14:35:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.410 ************************************ 00:18:47.410 START TEST nvmf_auth_host 00:18:47.410 ************************************ 00:18:47.410 14:35:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:47.668 * Looking for test storage... 
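For reference, the clean_kernel_target entries just above tear the kernel target back down before the next test begins: the namespace is disabled, the port-to-subsystem link is removed, the configfs directories are deleted child before parent, and nvmet_tcp/nvmet are unloaded. A condensed sketch of that teardown follows; as above, the destination of the bare "echo 0" is inferred (most likely the namespace enable attribute), because the trace does not show redirections.

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  # stop serving I/O from the namespace first
  echo 0 > "$subsys/namespaces/1/enable"
  # drop the port->subsystem link, then remove children before parents
  rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir  "$subsys/namespaces/1"
  rmdir  "$port"
  rmdir  "$subsys"
  # unload the kernel target modules once configfs is empty
  modprobe -r nvmet_tcp nvmet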
00:18:47.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.668 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:47.669 Cannot find device "nvmf_tgt_br" 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:47.669 Cannot find device "nvmf_tgt_br2" 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:47.669 Cannot find device "nvmf_tgt_br" 
00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:47.669 Cannot find device "nvmf_tgt_br2" 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:47.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:47.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:47.669 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:47.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:47.927 00:18:47.927 --- 10.0.0.2 ping statistics --- 00:18:47.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.927 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:47.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:47.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:47.927 00:18:47.927 --- 10.0.0.3 ping statistics --- 00:18:47.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.927 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:47.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:47.927 00:18:47.927 --- 10.0.0.1 ping statistics --- 00:18:47.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.927 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.927 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91512 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91512 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91512 ']' 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.928 14:35:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.928 14:35:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=56931a15f035c9d94da46a1c1d824053 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.77N 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 56931a15f035c9d94da46a1c1d824053 0 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 56931a15f035c9d94da46a1c1d824053 0 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=56931a15f035c9d94da46a1c1d824053 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.77N 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.77N 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.77N 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f16499517e15612a7d66ae3220a6c58214a7f041958739068296e7c3807e9eca 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.3ZD 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f16499517e15612a7d66ae3220a6c58214a7f041958739068296e7c3807e9eca 3 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f16499517e15612a7d66ae3220a6c58214a7f041958739068296e7c3807e9eca 3 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.301 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f16499517e15612a7d66ae3220a6c58214a7f041958739068296e7c3807e9eca 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.3ZD 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.3ZD 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.3ZD 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ce6c5f3061a519ff95a674863a098e882c4eeff2133c81e 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kYl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ce6c5f3061a519ff95a674863a098e882c4eeff2133c81e 0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ce6c5f3061a519ff95a674863a098e882c4eeff2133c81e 0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ce6c5f3061a519ff95a674863a098e882c4eeff2133c81e 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kYl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kYl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kYl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45341354b368bf7e6796b0c0c8da4dbf9a5cfb1e3d0eafe0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.DPl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45341354b368bf7e6796b0c0c8da4dbf9a5cfb1e3d0eafe0 2 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45341354b368bf7e6796b0c0c8da4dbf9a5cfb1e3d0eafe0 2 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45341354b368bf7e6796b0c0c8da4dbf9a5cfb1e3d0eafe0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.DPl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.DPl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DPl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c690db87c98cc8accdab0ee54816a6c0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MFH 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c690db87c98cc8accdab0ee54816a6c0 
1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c690db87c98cc8accdab0ee54816a6c0 1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c690db87c98cc8accdab0ee54816a6c0 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MFH 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MFH 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MFH 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0124eca68b33060acdc07502c6eacfd3 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Crl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0124eca68b33060acdc07502c6eacfd3 1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0124eca68b33060acdc07502c6eacfd3 1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0124eca68b33060acdc07502c6eacfd3 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Crl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Crl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Crl 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:49.302 14:35:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=322c2839321de8b08f6b0d265edefe219a03eafc541c84a9 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4iW 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 322c2839321de8b08f6b0d265edefe219a03eafc541c84a9 2 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 322c2839321de8b08f6b0d265edefe219a03eafc541c84a9 2 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.302 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=322c2839321de8b08f6b0d265edefe219a03eafc541c84a9 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4iW 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4iW 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4iW 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a6ee05fb8187ffc9095ecab4b3027d7 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.WY8 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a6ee05fb8187ffc9095ecab4b3027d7 0 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a6ee05fb8187ffc9095ecab4b3027d7 0 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a6ee05fb8187ffc9095ecab4b3027d7 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:49.561 14:35:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.WY8 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.WY8 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.WY8 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=78077e88306304cce905e6bd56d435417bc3ef33f860754181ff08104ceb8892 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.S4Y 00:18:49.561 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 78077e88306304cce905e6bd56d435417bc3ef33f860754181ff08104ceb8892 3 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 78077e88306304cce905e6bd56d435417bc3ef33f860754181ff08104ceb8892 3 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=78077e88306304cce905e6bd56d435417bc3ef33f860754181ff08104ceb8892 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.S4Y 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.S4Y 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.S4Y 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91512 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91512 ']' 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
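For reference, each gen_dhchap_key call traced above reduces to: draw random bytes as a hex string with xxd, wrap that string in the DHHC-1 secret format, and store it in a temp file with 0600 permissions. The sketch below is a simplified reconstruction, not the helper from nvmf/common.sh itself; the digest-id mapping (null=0, sha256=1, sha384=2, sha512=3) and the xxd/mktemp/chmod steps come straight from the trace, while the Python one-liner assumes the ASCII hex string itself is the secret and that its little-endian CRC-32 is appended before base64 encoding, which is consistent with the DHHC-1:xx:...: strings that appear later in this log.

    gen_dhchap_secret() {
        # Usage: gen_dhchap_secret <digest-id> <hex-length>
        # digest-id: 0=null, 1=sha256, 2=sha384, 3=sha512 (per the trace above).
        local digest_id=$1 len=$2 hex file
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
        file=$(mktemp -t spdk.key.XXX)
        # Assumed DHHC-1 wrapping: base64 over the ASCII secret plus its CRC-32.
        python3 -c 'import base64,struct,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(s + struct.pack("<I", zlib.crc32(s))).decode()))' "$hex" "$digest_id" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # e.g. the equivalent of 'gen_dhchap_key null 32' used for keys[0]:
    keyfile=$(gen_dhchap_secret 0 32)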
00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.562 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.77N 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.3ZD ]] 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3ZD 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kYl 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.820 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DPl ]] 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.DPl 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MFH 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Crl ]] 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Crl 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.821 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
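The keyring_file_add_key calls in this stretch come from a loop over the generated key files: each keys[i] is registered as key<i> with the running target, and a matching ckey<i> is added only when a controller key exists (ckeys[4] above is intentionally empty). rpc_cmd is the autotest helper that issues JSON-RPC calls to the target; a stand-alone sketch of the same calls via scripts/rpc.py, assuming the checked-out repo path and the default /var/tmp/spdk.sock RPC socket, would be:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    for i in "${!keys[@]}"; do
        $rpc_py keyring_file_add_key "key$i" "${keys[i]}"
        # Controller (bidirectional) keys are optional; skip empty ckeys entries.
        [[ -n ${ckeys[i]:-} ]] && $rpc_py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done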
00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4iW 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.WY8 ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.WY8 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.S4Y 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
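configure_kernel_target, whose trace begins here, stands up the Linux-kernel NVMe-oF target that the authenticated connections later in this log attach to. The essential configfs sequence is sketched below as a hedged summary rather than a verbatim copy of nvmf/common.sh: it assumes the nvmet and nvmet-tcp modules are available, uses /dev/nvme1n1 as the backing block device the script settles on a few lines further down, and folds in the allowed_hosts restriction that host/auth.sh adds afterwards.

    nqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe -a nvmet nvmet-tcp

    # One subsystem with a single namespace backed by a local block device.
    mkdir -p "$subsys/namespaces/1"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    # TCP listener on the initiator-facing address used throughout this log.
    mkdir -p "$port"
    echo tcp > "$port/addr_trtype"
    echo ipv4 > "$port/addr_adrfam"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo 4420 > "$port/addr_trsvcid"
    ln -s "$subsys" "$port/subsystems/$nqn"

    # Only the test host NQN may connect (host/auth.sh@36-38 below).
    mkdir -p "/sys/kernel/config/nvmet/hosts/$hostnqn"
    echo 0 > "$subsys/attr_allow_any_host"
    ln -s "/sys/kernel/config/nvmet/hosts/$hostnqn" "$subsys/allowed_hosts/$hostnqn"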
00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:50.079 14:35:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:50.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:50.336 Waiting for block devices as requested 00:18:50.336 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:50.593 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:51.158 No valid GPT data, bailing 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:51.158 No valid GPT data, bailing 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:51.158 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:51.159 No valid GPT data, bailing 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:51.159 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:51.418 No valid GPT data, bailing 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:51.418 14:35:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 --hostid=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 -a 10.0.0.1 -t tcp -s 4420 00:18:51.418 00:18:51.418 Discovery Log Number of Records 2, Generation counter 2 00:18:51.418 =====Discovery Log Entry 0====== 00:18:51.418 trtype: tcp 00:18:51.418 adrfam: ipv4 00:18:51.418 subtype: current discovery subsystem 00:18:51.418 treq: not specified, sq flow control disable supported 00:18:51.418 portid: 1 00:18:51.418 trsvcid: 4420 00:18:51.418 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.418 traddr: 10.0.0.1 00:18:51.418 eflags: none 00:18:51.418 sectype: none 00:18:51.418 =====Discovery Log Entry 1====== 00:18:51.418 trtype: tcp 00:18:51.418 adrfam: ipv4 00:18:51.418 subtype: nvme subsystem 00:18:51.418 treq: not specified, sq flow control disable supported 00:18:51.418 portid: 1 00:18:51.418 trsvcid: 4420 00:18:51.418 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:51.418 traddr: 10.0.0.1 00:18:51.418 eflags: none 00:18:51.418 sectype: none 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.418 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.419 14:35:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.682 nvme0n1 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.682 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.683 nvme0n1 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.683 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.964 nvme0n1 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.964 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.965 14:35:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.965 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.223 nvme0n1 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:52.223 14:35:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.223 nvme0n1 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.223 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.482 nvme0n1 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.482 14:35:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.482 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.740 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.998 nvme0n1 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.998 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.999 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.257 nvme0n1 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.257 14:35:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.257 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 nvme0n1 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.516 14:35:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 nvme0n1 00:18:53.516 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.516 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.516 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.516 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.516 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.516 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
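
[editor's note] Each nvmet_auth_set_key iteration traced above pushes the chosen digest, DH group and DH-HMAC-CHAP secret into the kernel nvmet target before the host tries to connect. A minimal sketch of that step is below; the configfs attribute paths and the use of the host NQN as the hosts/ entry name are assumptions based on Linux nvmet auth support, not something printed in this log — only the echoed values (hmac(sha256), the ffdhe group, the DHHC-1 secrets) and the keys/ckeys arrays come from the trace itself.

# Sketch only: approximates what the nvmet_auth_set_key calls in this log appear to do.
# Configfs paths below are assumed, not shown in the log.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Assumed location of the allowed-host entry for the host NQN used in this run.
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. 'hmac(sha256)' as echoed at auth.sh@48
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe2048 as echoed at auth.sh@49
    echo "${keys[$keyid]}" > "${host_dir}/dhchap_key"      # DHHC-1:0x:... host secret (auth.sh@50)
    # A controller (bidirectional) secret is only written when one exists for this keyid,
    # mirroring the [[ -z ckey ]] guard at auth.sh@51.
    [[ -n "${ckeys[$keyid]}" ]] && echo "${ckeys[$keyid]}" > "${host_dir}/dhchap_ctrl_key"
}
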
00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 nvme0n1 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:53.775 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.710 14:35:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.710 nvme0n1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.710 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.969 nvme0n1 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.969 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.227 nvme0n1 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.227 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.486 nvme0n1 00:18:55.486 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.486 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:55.486 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.486 14:35:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.486 14:35:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.486 14:35:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.486 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.743 nvme0n1 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.743 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:55.744 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:55.744 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:55.744 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:55.744 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:55.744 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:55.744 14:35:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.643 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.208 nvme0n1 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.208 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.209 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.467 nvme0n1 00:18:58.467 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.467 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.467 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.467 14:35:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.467 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.467 14:35:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.467 
14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.467 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.750 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.018 nvme0n1 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:59.018 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.019 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.276 nvme0n1 00:18:59.276 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.276 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.276 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.276 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.276 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.533 14:35:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.533 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.534 14:35:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.791 nvme0n1 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.791 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.792 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.726 nvme0n1 00:19:00.726 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.726 14:35:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.726 14:35:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.726 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.726 14:35:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:00.726 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.727 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.292 nvme0n1 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.292 14:35:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.858 nvme0n1 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.858 
14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.858 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
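Every keyid iteration in this trace follows the same cycle that the sha256/ffdhe8192 frames around this point show: nvmet_auth_set_key programs the target-side DH-HMAC-CHAP secret, then connect_authenticate restricts the initiator to the digest/dhgroup under test, attaches a controller with the matching key, confirms the controller enumerated, and detaches it. A minimal initiator-side sketch of one such pass, using only the rpc_cmd calls visible in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the key names key3/ckey3 are assumed to have been registered with the keyring earlier in the run, outside this excerpt):

    # one connect_authenticate pass for sha256 / ffdhe8192 / keyid=3 (sketch, not the literal helper)
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller appears only if auth succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0

For keyid=4 the ckey slot is empty ([[ -z '' ]] in the trace), so --dhchap-ctrlr-key is dropped and only host-to-controller (unidirectional) authentication is exercised.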
00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.116 14:35:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.683 nvme0n1 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:02.683 
14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.683 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.684 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.249 nvme0n1 00:19:03.249 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.249 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.249 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.249 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.250 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 nvme0n1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
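At this point the trace has rolled over from sha256 to sha384: host/auth.sh@100 advances the digest, @101 restarts the dhgroup list at ffdhe2048, and @102 walks the key indices again before @103/@104 repeat the set-key/connect cycle. Reconstructed from those frame markers, the driving loop in host/auth.sh has roughly the shape below; the array contents beyond what this excerpt shows (sha256/sha384, ffdhe2048 through ffdhe8192, key indices 0-4) are assumptions, and the digests/dhgroups/keys arrays plus the two helpers are defined earlier in host/auth.sh, outside this excerpt:

    # reconstruction of the main digest/dhgroup/key loop, not a standalone script
    for digest in "${digests[@]}"; do            # sha256, sha384, ... (later entries assumed)
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 .. ffdhe8192 seen in this trace
        for keyid in "${!keys[@]}"; do           # key indices 0-4 seen in this trace
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103: program the kernel target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104: attach, verify, detach
        done
      done
    done

Each inner iteration is the attach/verify/detach pass sketched earlier, just with the next digest, DH group, and key pair substituted in.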
00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.508 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.767 nvme0n1 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.767 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 nvme0n1 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 nvme0n1 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.025 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.026 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.284 nvme0n1 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.284 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.543 nvme0n1 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
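
At this point the outer loop has moved on from ffdhe2048 to ffdhe3072; the remainder of this excerpt walks the same five key indices through ffdhe3072, ffdhe4096 and finally ffdhe6144, all under hmac(sha384). Structurally, the driver is just a pair of nested loops around the two helpers traced above (the host/auth.sh@101-@104 frames in the log). The sketch below mirrors that structure; the array contents are assumptions reconstructed from this excerpt, which only shows the sha384 digest, the four ffdhe groups and key indices 0 through 4.

  # Assumed reconstruction of the loop driving this portion of the trace.
  digests=(sha384)                                   # only sha384 appears in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
  keys=(key0 key1 key2 key3 key4)                    # names of pre-registered secrets

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # Target side: install the digest, DH group and DHHC-1 secrets.
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              # Host side: set_options, attach with --dhchap-key, verify, detach.
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
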
00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.543 14:35:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.543 nvme0n1 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.543 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 nvme0n1 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:04.802 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.803 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.061 nvme0n1 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.061 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.319 nvme0n1 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.319 14:35:44 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.319 14:35:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.320 14:35:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.320 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.320 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.577 nvme0n1 00:19:05.577 14:35:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.577 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 nvme0n1 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.835 14:35:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.835 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.093 nvme0n1 00:19:06.093 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.093 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:06.094 14:35:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.094 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.351 nvme0n1 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:06.351 14:35:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.608 nvme0n1 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:06.608 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.609 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.866 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.124 nvme0n1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.124 14:35:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.743 nvme0n1 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.743 14:35:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:07.743 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.744 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.018 nvme0n1 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:08.018 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.019 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.585 nvme0n1 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.585 14:35:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.848 nvme0n1 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.848 14:35:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.415 nvme0n1 00:19:09.415 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.415 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:09.415 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.673 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.238 nvme0n1 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.238 14:35:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.172 nvme0n1 00:19:11.172 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.172 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.172 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.172 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.172 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.172 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.173 14:35:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.739 nvme0n1 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:11.739 14:35:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.739 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.305 nvme0n1 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.305 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.564 14:35:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.564 nvme0n1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.564 14:35:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.564 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.822 nvme0n1 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:12.822 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.823 nvme0n1 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:12.823 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.082 14:35:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.082 nvme0n1 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.082 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.083 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.341 nvme0n1 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.341 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.342 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.342 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.601 nvme0n1 00:19:13.601 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.601 
14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.601 14:35:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.601 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.601 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.601 14:35:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.601 14:35:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.601 nvme0n1 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.601 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
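The echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... lines in this trace come from nvmet_auth_set_key, which reprograms the kernel nvmet target with the digest, DH group and secret for the key index under test. A minimal sketch of that kind of helper follows; it assumes the standard nvmet configfs DH-HMAC-CHAP attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/ and is illustrative only, not the exact body of the helper in test/nvmf/host/auth.sh.

# Sketch only: assumes the kernel nvmet configfs layout; run as root on the target.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha512)
    echo "${dhgroup}" > "${host}/dhchap_dhgroup"       # e.g. ffdhe3072
    echo "${key}" > "${host}/dhchap_key"               # DHHC-1:xx:<base64>: secret
    # The controller (bidirectional) key is optional; keyid 4 in this run has none.
    if [[ -n ${ckey} ]]; then
        echo "${ckey}" > "${host}/dhchap_ctrl_key"
    fi
}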
00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.859 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.860 nvme0n1 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.860 14:35:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
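On the initiator side, each connect_authenticate pass in this trace is the same four RPCs repeated with a new digest/dhgroup/key combination: restrict bdev_nvme to the parameters under test, attach a controller against 10.0.0.1:4420 with the matching key pair, confirm that exactly one controller named nvme0 appeared, and detach it again. Driven through scripts/rpc.py instead of the test's rpc_cmd wrapper it looks roughly like the following sketch (key3/ckey3 are assumed to be keyring entries registered earlier by the test):

rpc=./scripts/rpc.py

# Limit the initiator to the digest/DH group under test (sha512 / ffdhe3072 here).
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Attach with the key pair for this key index.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# A successful DH-HMAC-CHAP negotiation leaves a single controller named "nvme0".
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0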
00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.860 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.119 nvme0n1 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.119 
14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.119 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.378 nvme0n1 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.378 14:35:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.637 nvme0n1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.637 14:35:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.637 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.895 nvme0n1 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
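The host/auth.sh@100 through @104 markers scattered through the trace show the shape of the whole test: nested loops over digests, DH groups and key indices, re-keying the target and re-authenticating the host for every combination. Below is a self-contained reconstruction of that loop with the two helpers stubbed out so the shape can be run on its own; only the values visible in this excerpt are listed, the real script covers more digests, groups and key material.

# Stubs standing in for the real helpers in test/nvmf/host/auth.sh.
nvmet_auth_set_key()   { echo "target -> digest=$1 dhgroup=$2 keyid=$3"; }
connect_authenticate() { echo "host   -> digest=$1 dhgroup=$2 keyid=$3"; }

digests=(sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)
keys=(key0 key1 key2 key3 key4)   # stand-ins for the DHHC-1:... secrets

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Re-key the target, then prove the initiator still authenticates.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done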
00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:14.895 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:14.896 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.896 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.896 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.154 nvme0n1 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.154 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.412 nvme0n1 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:15.412 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.413 14:35:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.671 nvme0n1 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
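On the target side, every nvmet_auth_set_key call traced here performs the same four echo steps (auth.sh@48-51). The xtrace does not print the redirect targets, so the condensed sketch below names the destination files only as assumed kernel nvmet configfs attributes; the $host_dir path and attribute names are inferred, not confirmed by this log:

# Assumption-laden view of "nvmet_auth_set_key sha512 ffdhe6144 0" as traced above.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
key='DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0:'
ckey='DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=:'
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # auth.sh@48
echo ffdhe6144      > "$host_dir/dhchap_dhgroup"    # auth.sh@49
echo "$key"         > "$host_dir/dhchap_key"        # auth.sh@50
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # auth.sh@51, only when a ckey exists
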
00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.671 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.236 nvme0n1 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
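get_main_ns_ip runs before every attach in this trace; reconstructed from the visible xtrace (nvmf/common.sh@741-755), it maps the transport name to an environment-variable name and then dereferences it. The sketch below fills the gaps with assumptions: the $TEST_TRANSPORT variable name and the early-return error handling are not confirmed by this log, and the untraced lines 751-754 (presumably a fallback path) are omitted.

get_main_ns_ip() {
    # Reconstructed sketch; names beyond those shown in the trace are assumptions.
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP                 # common.sh@744
    ip_candidates["tcp"]=NVMF_INITIATOR_IP                     # common.sh@745
    [[ -z $TEST_TRANSPORT ]] && return 1                       # trace: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1     # trace: [[ -z NVMF_INITIATOR_IP ]]
    ip=${ip_candidates[$TEST_TRANSPORT]}                       # common.sh@748
    [[ -z ${!ip} ]] && return 1                                # trace: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                              # common.sh@755 -> 10.0.0.1
}
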
00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.236 14:35:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 nvme0n1 00:19:16.494 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.494 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.494 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.494 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.494 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.752 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.752 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.752 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.752 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.752 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.753 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.011 nvme0n1 00:19:17.011 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.011 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.011 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.011 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.011 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.011 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.012 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.579 nvme0n1 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.579 14:35:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.579 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.837 nvme0n1 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.837 14:35:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.837 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY5MzFhMTVmMDM1YzlkOTRkYTQ2YTFjMWQ4MjQwNTMquNo0: 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: ]] 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE2NDk5NTE3ZTE1NjEyYTdkNjZhZTMyMjBhNmM1ODIxNGE3ZjA0MTk1ODczOTA2ODI5NmU3YzM4MDdlOWVjYTp49IY=: 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.096 14:35:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 nvme0n1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.661 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.226 nvme0n1 00:19:19.226 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.226 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.226 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.226 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.226 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:19.226 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.484 14:35:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzY5MGRiODdjOThjYzhhY2NkYWIwZWU1NDgxNmE2YzDyDdwg: 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDEyNGVjYTY4YjMzMDYwYWNkYzA3NTAyYzZlYWNmZDPPAuT2: 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:19.484 14:35:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:19.485 14:35:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.485 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.485 14:35:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.051 nvme0n1 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzIyYzI4MzkzMjFkZThiMDhmNmIwZDI2NWVkZWZlMjE5YTAzZWFmYzU0MWM4NGE59i8URw==: 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGE2ZWUwNWZiODE4N2ZmYzkwOTVlY2FiNGIzMDI3ZDcF33xd: 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:20.051 14:35:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.051 14:35:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.616 nvme0n1 00:19:20.616 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.616 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.616 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.616 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.616 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.616 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwNzdlODgzMDYzMDRjY2U5MDVlNmJkNTZkNDM1NDE3YmMzZWYzM2Y4NjA3NTQxODFmZjA4MTA0Y2ViODg5MlJod2c=: 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:20.873 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.438 nvme0n1 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmNlNmM1ZjMwNjFhNTE5ZmY5NWE2NzQ4NjNhMDk4ZTg4MmM0ZWVmZjIxMzNjODFlXwQ6bQ==: 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: ]] 00:19:21.438 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDUzNDEzNTRiMzY4YmY3ZTY3OTZiMGMwYzhkYTRkYmY5YTVjZmIxZTNkMGVhZmUw1xVT4w==: 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.439 
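[Editor's sketch] The connect_authenticate() round traced above reduces to the RPC sequence below; it is a condensation using the same rpc_cmd wrapper, address, port and NQNs that appear in the trace, not part of auth.sh itself. key3/ckey3 are keyring entries registered earlier in the test, outside this excerpt.
    # restrict the initiator to the digest/dhgroup pair under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach to the kernel target, offering key3 and the controller key ckey3
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # a successful DH-HMAC-CHAP negotiation leaves exactly one controller, named nvme0
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
    rpc_cmd bdev_nvme_detach_controller nvme0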
14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.439 14:36:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 2024/07/15 14:36:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:21.439 request: 00:19:21.439 { 00:19:21.439 "method": "bdev_nvme_attach_controller", 00:19:21.439 "params": { 00:19:21.439 "name": "nvme0", 00:19:21.439 "trtype": "tcp", 00:19:21.439 "traddr": "10.0.0.1", 00:19:21.439 "adrfam": "ipv4", 00:19:21.439 "trsvcid": "4420", 00:19:21.439 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:21.439 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:21.439 "prchk_reftag": false, 00:19:21.439 "prchk_guard": false, 00:19:21.439 "hdgst": false, 00:19:21.439 "ddgst": false 00:19:21.439 } 00:19:21.439 } 00:19:21.439 Got JSON-RPC error response 00:19:21.439 GoRPCClient: error on JSON-RPC call 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# jq length 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.698 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:21.698 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:21.698 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 2024/07/15 14:36:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:21.699 request: 00:19:21.699 { 00:19:21.699 "method": "bdev_nvme_attach_controller", 00:19:21.699 "params": { 00:19:21.699 "name": 
"nvme0", 00:19:21.699 "trtype": "tcp", 00:19:21.699 "traddr": "10.0.0.1", 00:19:21.699 "adrfam": "ipv4", 00:19:21.699 "trsvcid": "4420", 00:19:21.699 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:21.699 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:21.699 "prchk_reftag": false, 00:19:21.699 "prchk_guard": false, 00:19:21.699 "hdgst": false, 00:19:21.699 "ddgst": false, 00:19:21.699 "dhchap_key": "key2" 00:19:21.699 } 00:19:21.699 } 00:19:21.699 Got JSON-RPC error response 00:19:21.699 GoRPCClient: error on JSON-RPC call 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 2024/07/15 14:36:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:21.699 request: 00:19:21.699 { 00:19:21.699 "method": "bdev_nvme_attach_controller", 00:19:21.699 "params": { 00:19:21.699 "name": "nvme0", 00:19:21.699 "trtype": "tcp", 00:19:21.699 "traddr": "10.0.0.1", 00:19:21.699 "adrfam": "ipv4", 00:19:21.699 "trsvcid": "4420", 00:19:21.699 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:21.699 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:21.699 "prchk_reftag": false, 00:19:21.699 "prchk_guard": false, 00:19:21.699 "hdgst": false, 00:19:21.699 "ddgst": false, 00:19:21.699 "dhchap_key": "key1", 00:19:21.699 "dhchap_ctrlr_key": "ckey2" 00:19:21.699 } 00:19:21.699 } 00:19:21.699 Got JSON-RPC error response 00:19:21.699 GoRPCClient: error on JSON-RPC call 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.699 rmmod nvme_tcp 00:19:21.699 rmmod nvme_fabrics 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91512 ']' 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91512 00:19:21.699 14:36:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91512 ']' 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91512 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91512 00:19:21.699 killing process with pid 91512 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91512' 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91512 00:19:21.699 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91512 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:21.957 14:36:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:22.890 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:22.890 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:22.890 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:22.890 14:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.77N /tmp/spdk.key-null.kYl /tmp/spdk.key-sha256.MFH /tmp/spdk.key-sha384.4iW /tmp/spdk.key-sha512.S4Y /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:22.890 14:36:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:23.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:23.149 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:23.149 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:23.149 00:19:23.149 real 0m35.775s 00:19:23.149 user 0m32.114s 00:19:23.149 sys 0m3.522s 00:19:23.149 14:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:23.149 14:36:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:23.149 ************************************ 00:19:23.149 END TEST nvmf_auth_host 00:19:23.149 ************************************ 00:19:23.407 14:36:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:23.407 14:36:02 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:19:23.407 14:36:02 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:23.407 14:36:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:23.407 14:36:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.407 14:36:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:23.407 ************************************ 00:19:23.407 START TEST nvmf_digest 00:19:23.407 ************************************ 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:23.407 * Looking for test storage... 
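[Editor's sketch] The cleanup just traced (host/auth.sh@25-28 plus clean_kernel_target) removes the kernel nvmet target strictly bottom-up through configfs before unloading the modules. Condensed, with one hedge: the redirect target of the 'echo 0' step is not expanded by xtrace and is assumed here to be the namespace enable attribute.
    cfg=/sys/kernel/config/nvmet
    rm    $cfg/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir $cfg/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > $cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable   # assumed target of 'echo 0'
    rm -f $cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir $cfg/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir $cfg/ports/1
    rmdir $cfg/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet
    # finally the generated DH-HMAC-CHAP key files and the auth log are removed
    rm -f /tmp/spdk.key-null.77N /tmp/spdk.key-null.kYl /tmp/spdk.key-sha256.MFH \
          /tmp/spdk.key-sha384.4iW /tmp/spdk.key-sha512.S4Y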
00:19:23.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.407 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
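[Editor's sketch] nvmf_veth_init, traced below, builds the virtual topology the whole digest test runs on. Stripped of the helper plumbing, it amounts to the following; interface names, addresses and the port-4420 iptables rule are exactly those in the trace, and the second target interface (nvmf_tgt_if2, 10.0.0.3) is created the same way.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, moved into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator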
00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:23.408 Cannot find device "nvmf_tgt_br" 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.408 Cannot find device "nvmf_tgt_br2" 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:23.408 Cannot find device "nvmf_tgt_br" 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:23.408 Cannot find device "nvmf_tgt_br2" 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:23.408 14:36:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:23.408 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:23.666 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:23.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.666 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:23.666 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:23.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:23.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:23.667 00:19:23.667 --- 10.0.0.2 ping statistics --- 00:19:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.667 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:23.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:23.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:19:23.667 00:19:23.667 --- 10.0.0.3 ping statistics --- 00:19:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.667 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:23.667 00:19:23.667 --- 10.0.0.1 ping statistics --- 00:19:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.667 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:23.667 14:36:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:23.925 ************************************ 00:19:23.925 START TEST nvmf_digest_clean 00:19:23.925 ************************************ 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=93104 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 93104 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93104 ']' 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:23.925 14:36:03 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.925 14:36:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:23.925 [2024-07-15 14:36:03.336761] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:23.925 [2024-07-15 14:36:03.336862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.925 [2024-07-15 14:36:03.478226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.183 [2024-07-15 14:36:03.545058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.183 [2024-07-15 14:36:03.545111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.183 [2024-07-15 14:36:03.545123] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.183 [2024-07-15 14:36:03.545134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.183 [2024-07-15 14:36:03.545143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
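[Editor's sketch] The rpc_cmd block inside common_target_config (the bare rpc_cmd at host/digest.sh@43 below) is not expanded by xtrace, but judging from the null0 bdev and the NVMe/TCP listener on 10.0.0.2:4420 reported afterwards, it has to amount to roughly the calls below. The RPC names are standard SPDK RPCs; the null-bdev geometry, the subsystem flags and the placement of framework_start_init are assumptions (the target was started with --wait-for-rpc, so the framework must be initialized somewhere before the transport is created).
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport --trtype tcp
    rpc_cmd bdev_null_create null0 100 4096
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --allow-any-host --serial-number SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420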
00:19:24.183 [2024-07-15 14:36:03.545177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.749 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:25.007 null0 00:19:25.007 [2024-07-15 14:36:04.413227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.007 [2024-07-15 14:36:04.437346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
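[Editor's sketch] run_bperf randread 4096 128 false, which starts here, drives a second SPDK app (bdevperf) over its own RPC socket. Condensed from the commands in the trace; the suite additionally waits for /var/tmp/bperf.sock via waitforlisten before issuing RPCs.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
            -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    bperfpid=$!
    # --wait-for-rpc leaves room to reconfigure the accel framework (DSA) before init;
    # scan_dsa=false in this job, so the framework is simply started:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # attach to the target in the netns, with data digest enabled so every I/O exercises crc32c
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
            --ddgst -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the 2-second workload defined on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests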
00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93154 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93154 /var/tmp/bperf.sock 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93154 ']' 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.007 14:36:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:25.007 [2024-07-15 14:36:04.502731] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:19:25.007 [2024-07-15 14:36:04.503090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93154 ] 00:19:25.265 [2024-07-15 14:36:04.641873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.265 [2024-07-15 14:36:04.742386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.200 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.200 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:26.200 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:26.200 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:26.200 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:26.458 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:26.458 14:36:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:26.717 nvme0n1 00:19:26.717 14:36:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:26.717 14:36:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:26.717 Running I/O for 2 seconds... 
00:19:29.244 00:19:29.244 Latency(us) 00:19:29.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.244 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:29.244 nvme0n1 : 2.01 18382.66 71.81 0.00 0.00 6955.06 3693.85 18350.08 00:19:29.245 =================================================================================================================== 00:19:29.245 Total : 18382.66 71.81 0.00 0.00 6955.06 3693.85 18350.08 00:19:29.245 0 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:29.245 | select(.opcode=="crc32c") 00:19:29.245 | "\(.module_name) \(.executed)"' 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93154 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93154 ']' 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93154 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93154 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93154' 00:19:29.245 killing process with pid 93154 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93154 00:19:29.245 Received shutdown signal, test time was about 2.000000 seconds 00:19:29.245 00:19:29.245 Latency(us) 00:19:29.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.245 =================================================================================================================== 00:19:29.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93154 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:29.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93243 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93243 /var/tmp/bperf.sock 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93243 ']' 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.245 14:36:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:29.245 [2024-07-15 14:36:08.807113] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:29.245 [2024-07-15 14:36:08.807393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:19:29.245 Zero copy mechanism will not be used. 
00:19:29.245 llocations --file-prefix=spdk_pid93243 ] 00:19:29.503 [2024-07-15 14:36:08.947466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.503 [2024-07-15 14:36:09.020749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.436 14:36:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.436 14:36:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:30.436 14:36:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:30.436 14:36:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:30.436 14:36:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:30.695 14:36:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.695 14:36:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:30.953 nvme0n1 00:19:30.953 14:36:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:30.953 14:36:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:30.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:30.953 Zero copy mechanism will not be used. 00:19:30.953 Running I/O for 2 seconds... 
00:19:33.482 00:19:33.482 Latency(us) 00:19:33.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.482 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:33.482 nvme0n1 : 2.00 8052.37 1006.55 0.00 0.00 1983.06 655.36 8400.52 00:19:33.482 =================================================================================================================== 00:19:33.482 Total : 8052.37 1006.55 0.00 0.00 1983.06 655.36 8400.52 00:19:33.482 0 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:33.482 | select(.opcode=="crc32c") 00:19:33.482 | "\(.module_name) \(.executed)"' 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93243 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93243 ']' 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93243 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93243 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:33.482 killing process with pid 93243 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93243' 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93243 00:19:33.482 Received shutdown signal, test time was about 2.000000 seconds 00:19:33.482 00:19:33.482 Latency(us) 00:19:33.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.482 =================================================================================================================== 00:19:33.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93243 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93329 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:33.482 14:36:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93329 /var/tmp/bperf.sock 00:19:33.482 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93329 ']' 00:19:33.482 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:33.482 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:33.482 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:33.482 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.483 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:33.483 [2024-07-15 14:36:13.042627] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:19:33.483 [2024-07-15 14:36:13.042737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93329 ] 00:19:33.740 [2024-07-15 14:36:13.177217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.740 [2024-07-15 14:36:13.234484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.740 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:33.740 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:33.740 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:33.740 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:33.740 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:34.306 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:34.306 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:34.565 nvme0n1 00:19:34.565 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:34.565 14:36:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:34.565 Running I/O for 2 seconds... 
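The randwrite/4096/qd=128 pass above is launched the same way as the others: run_bperf starts a fresh bdevperf in the background with -z --wait-for-rpc (each run gets its own DPDK file prefix, spdk_pid93329 here) and waits for /var/tmp/bperf.sock to answer before issuing any RPCs. A rough stand-in for that launch-and-wait step, using the exact command line from the log; the polling loop below is only illustrative, the real waitforlisten helper in autotest_common.sh is more thorough:

# Start bdevperf held at --wait-for-rpc (command line copied from the xtrace).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

# Illustrative replacement for waitforlisten(): poll until the UNIX-domain RPC
# socket responds, giving up after roughly ten seconds.
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done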
00:19:37.152 00:19:37.152 Latency(us) 00:19:37.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.152 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:37.152 nvme0n1 : 2.01 21422.84 83.68 0.00 0.00 5968.22 2308.65 10902.81 00:19:37.152 =================================================================================================================== 00:19:37.152 Total : 21422.84 83.68 0.00 0.00 5968.22 2308.65 10902.81 00:19:37.152 0 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:37.152 | select(.opcode=="crc32c") 00:19:37.152 | "\(.module_name) \(.executed)"' 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93329 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93329 ']' 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93329 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93329 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:37.152 killing process with pid 93329 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93329' 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93329 00:19:37.152 Received shutdown signal, test time was about 2.000000 seconds 00:19:37.152 00:19:37.152 Latency(us) 00:19:37.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.152 =================================================================================================================== 00:19:37.152 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93329 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:37.152 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93406 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93406 /var/tmp/bperf.sock 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93406 ']' 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.153 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:37.153 [2024-07-15 14:36:16.649782] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:37.153 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:37.153 Zero copy mechanism will not be used. 
00:19:37.153 [2024-07-15 14:36:16.649867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93406 ] 00:19:37.411 [2024-07-15 14:36:16.785240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.411 [2024-07-15 14:36:16.843377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.411 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.411 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:37.411 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:37.411 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:37.411 14:36:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:37.668 14:36:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:37.668 14:36:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:37.925 nvme0n1 00:19:37.925 14:36:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:37.925 14:36:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:38.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:38.182 Zero copy mechanism will not be used. 00:19:38.182 Running I/O for 2 seconds... 
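The accel statistics read back after each pass are what decide pass/fail: with --ddgst enabled and scan_dsa=false the digest work is expected to land in the software module, and the executed counter must be non-zero. A sketch of that check, reusing the jq filter shown in the xtrace; the sample count in the comment is made up for illustration:

# Yields "<module_name> <executed>", e.g. "software 12562" (count illustrative).
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

# Fail the test unless crc32c actually executed, and in the expected module.
(( acc_executed > 0 )) || exit 1
[[ "$acc_module" == "software" ]] || exit 1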
00:19:40.100 00:19:40.100 Latency(us) 00:19:40.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.100 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:40.100 nvme0n1 : 2.00 6287.13 785.89 0.00 0.00 2538.70 1578.82 4230.05 00:19:40.100 =================================================================================================================== 00:19:40.100 Total : 6287.13 785.89 0.00 0.00 2538.70 1578.82 4230.05 00:19:40.100 0 00:19:40.100 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:40.100 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:40.100 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:40.100 | select(.opcode=="crc32c") 00:19:40.100 | "\(.module_name) \(.executed)"' 00:19:40.100 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:40.100 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93406 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93406 ']' 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93406 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93406 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.358 killing process with pid 93406 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93406' 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93406 00:19:40.358 Received shutdown signal, test time was about 2.000000 seconds 00:19:40.358 00:19:40.358 Latency(us) 00:19:40.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.358 =================================================================================================================== 00:19:40.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.358 14:36:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93406 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93104 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 93104 ']' 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93104 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93104 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.615 killing process with pid 93104 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93104' 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93104 00:19:40.615 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93104 00:19:40.873 00:19:40.873 real 0m16.946s 00:19:40.873 user 0m32.577s 00:19:40.873 sys 0m4.151s 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:40.873 ************************************ 00:19:40.873 END TEST nvmf_digest_clean 00:19:40.873 ************************************ 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:40.873 ************************************ 00:19:40.873 START TEST nvmf_digest_error 00:19:40.873 ************************************ 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93506 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93506 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93506 ']' 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.873 14:36:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:40.873 [2024-07-15 14:36:20.327683] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:40.873 [2024-07-15 14:36:20.327801] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.873 [2024-07-15 14:36:20.461660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.131 [2024-07-15 14:36:20.520406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.131 [2024-07-15 14:36:20.520472] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.131 [2024-07-15 14:36:20.520499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.131 [2024-07-15 14:36:20.520507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.131 [2024-07-15 14:36:20.520514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.131 [2024-07-15 14:36:20.520541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.697 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.697 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:41.697 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.697 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.697 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:41.956 [2024-07-15 14:36:21.301055] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.956 14:36:21 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:41.956 null0 00:19:41.956 [2024-07-15 14:36:21.371101] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.956 [2024-07-15 14:36:21.395244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93550 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93550 /var/tmp/bperf.sock 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93550 ']' 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.956 14:36:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:41.956 [2024-07-15 14:36:21.459031] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
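On the target side, nvmf_digest_error starts nvmf_tgt with --wait-for-rpc, routes the crc32c opcode to the error accel module, and then configures the usual null-bdev subsystem (the "Operation crc32c will be assigned to module error", null0 and "Listening on 10.0.0.2 port 4420" notices above). The exact rpc_cmd payload is not echoed in this log, so the following is only a hypothetical reconstruction with placeholder sizes and serial number, assuming default transport options:

# Hypothetical target-side sequence (the real common_target_config payload is
# not visible in this log); nvmf_tgt answers on its default RPC socket.
TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

$TGT_RPC accel_assign_opc -o crc32c -m error      # shown verbatim above
$TGT_RPC framework_start_init

$TGT_RPC bdev_null_create null0 100 4096          # placeholder size/block size
$TGT_RPC nvmf_create_transport -t tcp
$TGT_RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$TGT_RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$TGT_RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420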
00:19:41.956 [2024-07-15 14:36:21.459157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93550 ] 00:19:42.213 [2024-07-15 14:36:21.599020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.213 [2024-07-15 14:36:21.660889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:43.148 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:43.405 nvme0n1 00:19:43.405 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:43.406 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.406 14:36:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:43.664 14:36:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.664 14:36:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:43.664 14:36:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:43.664 Running I/O for 2 seconds... 
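The two-second run that follows produces the wall of digest failures below by design: the target's crc32c opcode is wired to the error module, and right before perform_tests the test arms it to corrupt 256 results, so the initiator reports "data digest error" and each affected command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A sketch of the initiator-vs-target split of the commands echoed above (socket paths as used in this run; the target uses its default RPC socket):

BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # initiator (bdevperf)
TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                            # target (nvmf_tgt)

# Initiator: keep per-NVMe error counters and retry failed I/O (-1 = no limit).
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: keep the error accel module quiet while the controller attaches...
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then arm it to corrupt the next 256 crc32c results and start the workload.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests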
00:19:43.664 [2024-07-15 14:36:23.120809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.120883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.120900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.133811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.133871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.133886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.148566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.148626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.148642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.163904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.163963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.163977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.177515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.177573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.177588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.188346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.188387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.188402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.201525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.201581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.201594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.217837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.217876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.217906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.233763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.233805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.233820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.664 [2024-07-15 14:36:23.248615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.664 [2024-07-15 14:36:23.248672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.664 [2024-07-15 14:36:23.248686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.261905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.261944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.261968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.275894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.275933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.275948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.288834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.288874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.288888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.302929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.302996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.317757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.317799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.317813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.329424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.329478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.329493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.344660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.344729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.344744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.358837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.358890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.358904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.371232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.371285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.371299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.385730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.385783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.385798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.400527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.400581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:43.922 [2024-07-15 14:36:23.400595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.412144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.412183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.412197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.427783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.427824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.427838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.442344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.442384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.442398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.456270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.456326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.922 [2024-07-15 14:36:23.456358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.922 [2024-07-15 14:36:23.470942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.922 [2024-07-15 14:36:23.470999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.923 [2024-07-15 14:36:23.471030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.923 [2024-07-15 14:36:23.484106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.923 [2024-07-15 14:36:23.484163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.923 [2024-07-15 14:36:23.484193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.923 [2024-07-15 14:36:23.498149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.923 [2024-07-15 14:36:23.498205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:12017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.923 [2024-07-15 14:36:23.498235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:43.923 [2024-07-15 14:36:23.509837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:43.923 [2024-07-15 14:36:23.509893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.923 [2024-07-15 14:36:23.509923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.180 [2024-07-15 14:36:23.524410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.180 [2024-07-15 14:36:23.524468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.180 [2024-07-15 14:36:23.524499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.180 [2024-07-15 14:36:23.538896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.180 [2024-07-15 14:36:23.538955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.180 [2024-07-15 14:36:23.538985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.180 [2024-07-15 14:36:23.552582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.552641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.552655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.564762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.564818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.564848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.578256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.578345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.578368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.591512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.591571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.591602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.606339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.606381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.606396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.620280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.620354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.620384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.631123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.631180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.631211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.645265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.645311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.645325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.660012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.660053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.673772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.673838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.673868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.687412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 
00:19:44.181 [2024-07-15 14:36:23.687467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.687498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.699898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.699953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.699983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.716267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.716340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.716354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.728195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.728267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.728298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.741776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.741830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.741860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.755893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.755947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.755977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.181 [2024-07-15 14:36:23.769555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.181 [2024-07-15 14:36:23.769620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.181 [2024-07-15 14:36:23.769650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.783040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.783095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.783126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.797682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.797752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.797767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.809638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.809681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.809708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.825529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.825586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.825617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.839627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.839684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.839724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.853098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.853138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.853167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.866894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.866948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.866977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.878437] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.878478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.878492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.892746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.892814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.892829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.907005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.907060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.907090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.920097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.920153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.920184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.933475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.933534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.933564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.947028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.947070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.947085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.959680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.959752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.959767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.972749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.972790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.972804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:23.986323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:23.986367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:23.986382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:24.000585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:24.000627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:24.000641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:24.015417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:24.015473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:24.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.440 [2024-07-15 14:36:24.030975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.440 [2024-07-15 14:36:24.031033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.440 [2024-07-15 14:36:24.031047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.044682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.044750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.044765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.059135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.059192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.059207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.074185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.074242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.074257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.087009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.087065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.087079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.100490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.100548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.100562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.115724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.115780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.115794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.129889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.129932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.129947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.144334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.144392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.144406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.158602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.158660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.158690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.170613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.170654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.170668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.182528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.182569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.182584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.198315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.198357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.198371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.213162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.213222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.213236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.227880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.227925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.227940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.241086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.241145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.241159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.254457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.254499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:44.699 [2024-07-15 14:36:24.254514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.268563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.268621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.268635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.699 [2024-07-15 14:36:24.284613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.699 [2024-07-15 14:36:24.284670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.699 [2024-07-15 14:36:24.284685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.297318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.297361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.297376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.309465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.309506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.309520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.325807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.325851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.325866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.338318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.338360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.338374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.352969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.353013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:15122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.353027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.366940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.366996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.367011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.381761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.958 [2024-07-15 14:36:24.381801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.958 [2024-07-15 14:36:24.381816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.958 [2024-07-15 14:36:24.395232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.395291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.395306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.410173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.410228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.410258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.422771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.422828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.422842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.436640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.436725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.436741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.451482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.451540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.451555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.465423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.465464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.465478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.480232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.480289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.480303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.495967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.496006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.496021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.508765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.508820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.508834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.522198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.522254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.522268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.538410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:44.959 [2024-07-15 14:36:24.538466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.538480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:44.959 [2024-07-15 14:36:24.550543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 
00:19:44.959 [2024-07-15 14:36:24.550583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.959 [2024-07-15 14:36:24.550597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.567953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.568015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.568029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.580104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.580145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.580158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.595200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.595256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.595270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.610074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.610115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.610129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.625363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.625402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.625416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.639340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.639380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.639394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.654085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.654138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.654153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.666422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.666464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.666478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.681088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.681133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.681148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.695618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.695675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.695689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.710890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.218 [2024-07-15 14:36:24.710933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.218 [2024-07-15 14:36:24.710947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.218 [2024-07-15 14:36:24.723567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.723623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.723654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.219 [2024-07-15 14:36:24.737883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.737941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.737955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.219 [2024-07-15 14:36:24.751935] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.751977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.751991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.219 [2024-07-15 14:36:24.766360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.766401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.766415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.219 [2024-07-15 14:36:24.780161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.780226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.780258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.219 [2024-07-15 14:36:24.794387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.794429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.794443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.219 [2024-07-15 14:36:24.806426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.219 [2024-07-15 14:36:24.806467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.219 [2024-07-15 14:36:24.806482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.820803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.820845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.820859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.833473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.833515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.833530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.850003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.850059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.850090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.863876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.863918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.863933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.877899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.877955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.877985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.892134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.892191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.892221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.903546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.903604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.903618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.918943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.918999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.919013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.934200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.934257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.934296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.948363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.948419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.948449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.962816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.962872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.962902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.975889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.975929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.975943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:24.990746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:24.990788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.477 [2024-07-15 14:36:24.990802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.477 [2024-07-15 14:36:25.005050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.477 [2024-07-15 14:36:25.005093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.478 [2024-07-15 14:36:25.005108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.478 [2024-07-15 14:36:25.019988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.478 [2024-07-15 14:36:25.020030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.478 [2024-07-15 14:36:25.020045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.478 [2024-07-15 14:36:25.032102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.478 [2024-07-15 14:36:25.032174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.478 [2024-07-15 14:36:25.032188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.478 [2024-07-15 14:36:25.046450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.478 [2024-07-15 14:36:25.046491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.478 [2024-07-15 14:36:25.046506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.478 [2024-07-15 14:36:25.061256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.478 [2024-07-15 14:36:25.061319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.478 [2024-07-15 14:36:25.061333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.735 [2024-07-15 14:36:25.075387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.735 [2024-07-15 14:36:25.075449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.735 [2024-07-15 14:36:25.075463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.735 [2024-07-15 14:36:25.089427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.735 [2024-07-15 14:36:25.089470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.735 [2024-07-15 14:36:25.089485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.736 [2024-07-15 14:36:25.103836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23973e0) 00:19:45.736 [2024-07-15 14:36:25.103877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.736 [2024-07-15 14:36:25.103891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:45.736 00:19:45.736 Latency(us) 00:19:45.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.736 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:45.736 nvme0n1 : 2.00 18235.45 71.23 0.00 0.00 7010.90 3842.79 19779.96 00:19:45.736 =================================================================================================================== 00:19:45.736 Total : 18235.45 71.23 0.00 0.00 7010.90 3842.79 19779.96 00:19:45.736 0 00:19:45.736 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:45.736 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:45.736 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:45.736 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:45.736 | .driver_specific 00:19:45.736 | .nvme_error 00:19:45.736 | .status_code 00:19:45.736 | .command_transient_transport_error' 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93550 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93550 ']' 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93550 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.993 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93550 00:19:45.994 killing process with pid 93550 00:19:45.994 Received shutdown signal, test time was about 2.000000 seconds 00:19:45.994 00:19:45.994 Latency(us) 00:19:45.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.994 =================================================================================================================== 00:19:45.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93550' 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93550 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93550 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93635 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93635 /var/tmp/bperf.sock 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93635 ']' 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:45.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.994 14:36:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:46.251 [2024-07-15 14:36:25.626363] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:46.251 [2024-07-15 14:36:25.626642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93635 ] 00:19:46.252 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:46.252 Zero copy mechanism will not be used. 00:19:46.252 [2024-07-15 14:36:25.760436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.252 [2024-07-15 14:36:25.819760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.186 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:47.186 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:47.186 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:47.186 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:47.444 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:47.444 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.444 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:47.444 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.444 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.444 14:36:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:47.710 nvme0n1 00:19:47.710 14:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:47.710 14:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.710 14:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:47.710 14:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.710 14:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:47.710 14:36:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
00:19:47.970 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:47.970 Zero copy mechanism will not be used. 00:19:47.970 Running I/O for 2 seconds... 00:19:47.970 [2024-07-15 14:36:27.394564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.394625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.970 [2024-07-15 14:36:27.394642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.970 [2024-07-15 14:36:27.399793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.399839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.970 [2024-07-15 14:36:27.399854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.970 [2024-07-15 14:36:27.405236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.405280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.970 [2024-07-15 14:36:27.405295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:47.970 [2024-07-15 14:36:27.409094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.409150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.970 [2024-07-15 14:36:27.409164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:47.970 [2024-07-15 14:36:27.413657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.413742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.970 [2024-07-15 14:36:27.413758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:47.970 [2024-07-15 14:36:27.417777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.417832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.970 [2024-07-15 14:36:27.417848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:47.970 [2024-07-15 14:36:27.421086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:47.970 [2024-07-15 14:36:27.421143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.970 [2024-07-15 14:36:27.421157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:47.970 [2024-07-15 14:36:27.425286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380)
00:19:47.970 [2024-07-15 14:36:27.425342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.970 [2024-07-15 14:36:27.425357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (nvme_tcp.c:1459 "data digest error on tqpair=(0xc2e380)"; nvme_qpair.c: 243 READ on sqid:1, nsid:1, len:32 with varying cid and lba; nvme_qpair.c: 474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1) repeats for the remaining I/Os from 14:36:27.429 through 14:36:28.041 (console timestamps 00:19:47.970 to 00:19:48.496) ...]
00:19:48.496 [2024-07-15 14:36:28.045746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380)
00:19:48.496 [2024-07-15 14:36:28.045787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.045801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.049850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.049893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.049908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.054394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.054436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.054450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.059059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.059102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.059117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.062591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.062632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.062647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.067314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.067367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.067381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.071200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.071244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.071258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.075680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.075737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.075752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.080333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.080375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.080390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.083496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.083537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.083551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.496 [2024-07-15 14:36:28.088329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.496 [2024-07-15 14:36:28.088372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.496 [2024-07-15 14:36:28.088387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.093523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.093565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.093580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.098344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.098386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.098401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.103680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.103733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.103748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.106862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.106914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.106928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.111332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.111374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.111388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.116934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.116992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.117022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.121981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.122022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.122036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.125821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.125875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.125904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.130179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.130236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.130267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.135363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.135406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.135421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.139978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 
[2024-07-15 14:36:28.140021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.140036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.142828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.142869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.142882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.147750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.147788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.147802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.152514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.152552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.152566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.156240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.156279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.156293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.160819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.160858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.160872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.166101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.166141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.166156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.171363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.171403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.756 [2024-07-15 14:36:28.171417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.756 [2024-07-15 14:36:28.176348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.756 [2024-07-15 14:36:28.176383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.176398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.179543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.179581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.183844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.183884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.183898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.188844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.188885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.188899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.191886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.191924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.191938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.195931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.195971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.195985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.200225] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.200268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.200282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.204237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.204279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.204293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.208605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.208646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.208661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.213046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.213088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.213102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.217105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.217144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.217158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.221123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.221170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.221184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.225112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.225167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.225182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:48.757 [2024-07-15 14:36:28.229622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.229661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.229675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.233892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.233927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.233940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.238425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.238464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.238478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.242232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.242270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.242293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.247012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.247051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.247065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.250693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.250749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.250762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.254527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.254568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.254582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.259313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.259356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.259370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.264142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.264184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.264199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.267429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.267470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.267485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.272868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.272926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.272940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.277634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.277675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.277689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.281178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.281219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.281233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.285848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.285890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.285905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.291419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.291463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.291478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.296919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.296965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.296980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.757 [2024-07-15 14:36:28.300396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.757 [2024-07-15 14:36:28.300438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.757 [2024-07-15 14:36:28.300452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.305066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.305110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.305124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.309898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.309942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.309956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.315053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.315095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.315110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.319005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.319044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.319058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.322747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.322787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.322802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.327567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.327610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.327625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.330825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.330868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.330883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.335395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.335438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.335452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.339241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.339283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.339297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.343248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.343291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.758 [2024-07-15 14:36:28.343305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:48.758 [2024-07-15 14:36:28.347767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:48.758 [2024-07-15 14:36:28.347811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:48.758 [2024-07-15 14:36:28.347826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.352745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.352789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.352803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.357611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.357657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.357672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.360405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.360444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.360459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.365521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.365564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.365578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.369065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.369106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.369121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.372890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.372932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.372945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.018 [2024-07-15 14:36:28.377643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.018 [2024-07-15 14:36:28.377685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.018 [2024-07-15 14:36:28.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.381960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.382000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.382014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.385448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.385488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.385503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.390112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.390154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.390168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.394654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.394708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.394724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.398428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.398469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.398484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.402431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.402472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.402486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.407650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.407711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.407727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.411368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.411409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.411424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.415226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.415267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.415282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.419906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.419949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.419963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.424056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.424098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.424112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.427998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.428073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.428087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.432338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.432397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.432411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.436663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.436731] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.436746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.440177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.440218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.440232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.445024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.445082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.445096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.448250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.448306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.448336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.453322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.453380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.457651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.457749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.457780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.461506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.461547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.461562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.466113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 
[2024-07-15 14:36:28.466169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.466198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.470181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.470238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.470267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.474382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.474424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.474437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.478834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.478891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.478920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.482885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.482941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.482972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.486523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.486566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.486580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.490630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.490675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.490689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.495932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.496006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.496021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.019 [2024-07-15 14:36:28.499504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.019 [2024-07-15 14:36:28.499559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.019 [2024-07-15 14:36:28.499589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.503629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.503685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.503722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.507992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.508048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.508078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.512423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.512479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.512509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.516178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.516235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.516264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.520987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.521043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.521072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.525280] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.525337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.525351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.530279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.530350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.530364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.533309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.533364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.533393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.538172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.538231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.538261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.542918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.542977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.542991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.547185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.547258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.547274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.551048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.551090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.551110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:49.020 [2024-07-15 14:36:28.555706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.555761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.555776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.559772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.559842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.559868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.563199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.563256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.563271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.567716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.567783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.567814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.572320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.572360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.572374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.576365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.576419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.576449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.580855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.580894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.580908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.584697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.584765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.584795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.589431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.589489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.589518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.593781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.593837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.593867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.597409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.597467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.597512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.601051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.601109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.601123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.605400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.605459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.605474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.020 [2024-07-15 14:36:28.609994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.020 [2024-07-15 14:36:28.610037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.020 [2024-07-15 14:36:28.610051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.614093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.614136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.614150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.618857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.618901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.618915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.622604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.622676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.626897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.626953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.626978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.631145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.631203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.631217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.634718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.634799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.634815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.639652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.639706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.639723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.645179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.645234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.645264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.648479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.648532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.648561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.653018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.653074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.653103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.281 [2024-07-15 14:36:28.657672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.281 [2024-07-15 14:36:28.657756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.281 [2024-07-15 14:36:28.657789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.662278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.662346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.662360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.665362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.665416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.665445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.669528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.669583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 
[2024-07-15 14:36:28.669612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.674565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.674608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.674622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.679837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.679894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.679923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.683374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.683430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.683459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.688035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.688091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.688121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.693231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.693288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.693318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.698510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.698553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.698568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.703552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.703626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.703640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.706414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.706453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.706468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.711538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.711596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.711627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.716285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.716341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.716370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.719640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.719724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.719742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.724118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.724175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.724205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.729339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.729396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.729411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.734400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.734442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.734456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.737253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.737310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.737324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.742482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.742523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.742538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.745980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.746035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.746049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.750753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.750807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.750836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.755693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.755760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.755791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.760600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.760642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.760656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.763953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.764008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.764037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.768743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.768811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.768840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.772199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.772255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.772269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.776965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.777022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.777037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.782277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.782347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.782362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.282 [2024-07-15 14:36:28.787422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.282 [2024-07-15 14:36:28.787463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.282 [2024-07-15 14:36:28.787478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.792179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.792235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.792249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.795111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 
[2024-07-15 14:36:28.795179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.795208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.799922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.799977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.800006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.804588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.804644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.804673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.809487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.809541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.809569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.812865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.812920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.812935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.817856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.817928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.822408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.822447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.822461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.826395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.826435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.826450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.830022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.830076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.830091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.834610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.834696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.834737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.839214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.839268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.839298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.843847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.843908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.843938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.847379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.847433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.847461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.851858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.851912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.851942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.856795] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.856850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.856879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.860112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.860165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.860194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.864199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.864254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.864283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.868559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.868612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.868640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.283 [2024-07-15 14:36:28.871776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.283 [2024-07-15 14:36:28.871827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.283 [2024-07-15 14:36:28.871841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.876444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.876485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.876499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.880509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.880565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.880580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:49.543 [2024-07-15 14:36:28.884751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.884852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.884882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.889257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.889314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.889329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.892902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.892943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.892957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.897457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.897502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.897532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.901756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.901796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.901811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.905572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.905629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.905643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.910482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.910523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.910537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.914091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.914131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.914145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.918804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.918857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.918887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.922227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.922268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.922283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.927087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.927133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.927147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.931692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.931758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.931788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.935796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.935848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.935878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.940611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.940654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.940669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.944225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.944280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.944310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.948379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.948434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.948463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.543 [2024-07-15 14:36:28.952470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.543 [2024-07-15 14:36:28.952525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.543 [2024-07-15 14:36:28.952555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.956535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.956589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.956619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.960767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.960832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.960862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.965081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.965126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.965140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.969324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.969366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.969380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.973913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.973967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.973996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.978412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.978452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.978465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.982658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.982739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.982754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.986554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.986594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.986609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.990415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.990457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.990472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.994179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.994233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:28.994261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:28.998473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:28.998532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 
[2024-07-15 14:36:28.998546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.002694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.002826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.002857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.007649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.007730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.007746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.011590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.011644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.011672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.015557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.015612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.015640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.019776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.019830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.019860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.023572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.023625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.023654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.028269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.028325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.028340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.031948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.031988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.032017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.544 [2024-07-15 14:36:29.036173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.544 [2024-07-15 14:36:29.036230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.544 [2024-07-15 14:36:29.036260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.041217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.041262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.041277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.046154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.046195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.046210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.049012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.049067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.049081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.054073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.054113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.054128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.057805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.057859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.057873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.062060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.062101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.062115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.066238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.066303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.066319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.071359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.071417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.071433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.074278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.074336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.074351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.078724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.078764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.078778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.083523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.083565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.083579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.087373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.087412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.087426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.091715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.091781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.091795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.097100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.097141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.097156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.101941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.101995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.102009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.105285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.105335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.109948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.110003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.110033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.115109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.115150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.115165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.118339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 
[2024-07-15 14:36:29.118380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.118393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.122650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.122747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.545 [2024-07-15 14:36:29.122765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.545 [2024-07-15 14:36:29.127224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.545 [2024-07-15 14:36:29.127281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.546 [2024-07-15 14:36:29.127295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.546 [2024-07-15 14:36:29.131656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.546 [2024-07-15 14:36:29.131737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.546 [2024-07-15 14:36:29.131753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.546 [2024-07-15 14:36:29.134680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.546 [2024-07-15 14:36:29.134758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.546 [2024-07-15 14:36:29.134789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.138865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.138919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.138949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.142992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.143047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.143061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.147230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.147270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.147284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.151519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.151575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.151606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.156016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.156056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.156070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.160215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.160255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.160270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.163537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.163577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.163592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.168217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.168259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.168273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.173503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.173545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.173559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.178656] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.178743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.182122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.182167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.182181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.186248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.186296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.186311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.190435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.190477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.190493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.193912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.193956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.193971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.198542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.198584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.198598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.806 [2024-07-15 14:36:29.203981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.204074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.806 [2024-07-15 14:36:29.204090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:19:49.806 [2024-07-15 14:36:29.208196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.806 [2024-07-15 14:36:29.208269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.208285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.212724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.212772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.212787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.217210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.217251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.217266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.221912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.221971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.222001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.226511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.226583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.226599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.231152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.231212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.231227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.235594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.235654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.235669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.239804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.239867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.239898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.243903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.243970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.244001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.248081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.248139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.248153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.253114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.253174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.253205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.256110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.256153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.256168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.261395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.261460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.261475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.265246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.265303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.265330] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.268852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.268911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.268926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.273603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.273659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.273690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.277507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.277561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.277592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.281834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.281903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.281932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.286109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.286157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.286171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.289997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.290073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.290087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.807 [2024-07-15 14:36:29.294874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.807 [2024-07-15 14:36:29.294928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.807 [2024-07-15 14:36:29.294957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.299159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.299213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.299243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.302079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.302131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.302161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.307178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.307233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.307264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.312054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.312108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.312137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.316479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.316531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.316560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.319854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.319907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.319921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.324433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.324497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:49.808 [2024-07-15 14:36:29.324527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.329342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.329398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.329428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.333344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.333396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.333426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.337129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.337182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.337210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.340880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.340935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.340949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.344639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.344692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.344733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.348936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.348991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.349005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.352311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.352364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.352393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.356726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.356777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.356807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.361246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.361297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.361312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.366116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.366184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.366213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.370003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.370063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.370093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.374693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.374769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.374785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.379213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.808 [2024-07-15 14:36:29.379253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.808 [2024-07-15 14:36:29.379267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:49.808 [2024-07-15 14:36:29.382850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380) 00:19:49.809 [2024-07-15 14:36:29.382889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:49.809 [2024-07-15 14:36:29.382902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:49.809 [2024-07-15 14:36:29.387168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc2e380)
00:19:49.809 [2024-07-15 14:36:29.387208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:49.809 [2024-07-15 14:36:29.387222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:49.809
00:19:49.809 Latency(us)
00:19:49.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:49.809 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:19:49.809 nvme0n1 : 2.00 7188.19 898.52 0.00 0.00 2221.61 636.74 6136.55
00:19:49.809 ===================================================================================================================
00:19:49.809 Total : 7188.19 898.52 0.00 0.00 2221.61 636.74 6136.55
00:19:49.809 0
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:50.067 | .driver_specific
00:19:50.067 | .nvme_error
00:19:50.067 | .status_code
00:19:50.067 | .command_transient_transport_error'
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 464 > 0 ))
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93635
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93635 ']'
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93635
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:50.067 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93635
00:19:50.325 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:50.326 killing process with pid 93635
00:19:50.326 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:50.326 14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93635'
00:19:50.326 Received shutdown signal, test time was about 2.000000 seconds
00:19:50.326
00:19:50.326 Latency(us)
00:19:50.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.326 ===================================================================================================================
00:19:50.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:50.326 14:36:29
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93635
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93635
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93725
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93725 /var/tmp/bperf.sock
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93725 ']'
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
14:36:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:50.326 [2024-07-15 14:36:29.889497] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization...
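The trace above closes out the randread error phase: host/digest.sh pulls the per-bdev NVMe error counters over the bperf RPC socket, checks that the transient-transport-error count is non-zero (464 in this run), kills the old bdevperf, and launches a fresh instance for the randwrite phase. Below is a minimal shell sketch of that sequence, assuming the same paths as the trace (/home/vagrant/spdk_repo/spdk and /var/tmp/bperf.sock) and a bperfpid variable holding the PID of the running bdevperf; it mirrors the traced commands rather than reproducing host/digest.sh verbatim.

# get_transient_errcount (host/digest.sh@27-28 above): bdev iostat over the bperf socket,
# narrowed with jq to the command_transient_transport_error counter for nvme0n1.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# host/digest.sh@71: the phase only passes if the injected digest errors actually surfaced.
(( errcount > 0 ))

# Tear down the randread bdevperf (killprocess 93635 above) and start the randwrite one:
# 4096-byte I/O, queue depth 128, 2-second run, started idle (-z) until perform_tests is sent.
kill "$bperfpid" && wait "$bperfpid"
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

The per-status-code counters under driver_specific.nvme_error rely on the --nvme-error-stat flag passed to bdev_nvme_set_options, as traced below for the write phase.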
00:19:50.326 [2024-07-15 14:36:29.889614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93725 ]
00:19:50.584 [2024-07-15 14:36:30.022673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:50.584 [2024-07-15 14:36:30.080507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:50.584 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:50.584 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:19:50.584 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:50.584 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:50.842 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:50.842 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:50.842 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:50.842 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:50.842 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:50.842 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:51.407 nvme0n1
00:19:51.407 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:51.407 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:51.407 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:51.407 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:51.407 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:51.407 14:36:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:51.407 Running I/O for 2 seconds...
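Before the 2-second randwrite run starts, the trace above configures the new bdevperf instance entirely over RPC: NVMe error statistics and unlimited bdev retries are switched on, the controller is attached over TCP with data digest (--ddgst) enabled, and crc32c error injection in the accel layer is re-armed in corrupt mode so the digest check fails during the run. A minimal sketch of that RPC sequence under the same path assumptions follows; the trace does not show which socket rpc_cmd resolves to, so the plain rpc.py invocation below (default RPC socket) is an assumption, while bperf_rpc explicitly targets the bdevperf socket as expanded at host/digest.sh@18.

# bperf_rpc: rpc.py aimed at the bdevperf instance just started with -r /var/tmp/bperf.sock.
bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
# rpc_cmd: the autotest_common.sh helper; the socket it addresses is not visible in the trace,
# so the default shown here is an assumption.
rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely (host/digest.sh@61).
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with injection disabled, attach the TCP controller with data digest enabled,
# then re-enable crc32c corruption with the same arguments as the traced call.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the queued randwrite job; bdevperf was started idle with -z.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The data digest error / TRANSIENT TRANSPORT ERROR lines that follow are the result of that injection.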
00:19:51.407 [2024-07-15 14:36:30.914505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6458 00:19:51.407 [2024-07-15 14:36:30.915584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.915636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:30.929307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5658 00:19:51.407 [2024-07-15 14:36:30.931048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.931084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:30.937850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e0a68 00:19:51.407 [2024-07-15 14:36:30.938643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.938676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:30.950319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f9b30 00:19:51.407 [2024-07-15 14:36:30.951280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.951313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:30.964904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e23b8 00:19:51.407 [2024-07-15 14:36:30.966560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.966600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:30.976343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ec408 00:19:51.407 [2024-07-15 14:36:30.977789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.977838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:30.987854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5a90 00:19:51.407 [2024-07-15 14:36:30.989235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.407 [2024-07-15 14:36:30.989266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:003e p:0 m:0 dnr:0 00:19:51.407 [2024-07-15 14:36:31.002283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190de038 00:19:51.666 [2024-07-15 14:36:31.004284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.004315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.010886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f7970 00:19:51.666 [2024-07-15 14:36:31.011972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.012019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.025531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2948 00:19:51.666 [2024-07-15 14:36:31.027312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.027359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.034033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb328 00:19:51.666 [2024-07-15 14:36:31.034759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.034805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.048344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e7c50 00:19:51.666 [2024-07-15 14:36:31.049764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.049795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.059573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fdeb0 00:19:51.666 [2024-07-15 14:36:31.060756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.060786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.071314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f57b0 00:19:51.666 [2024-07-15 14:36:31.072400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.072431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.085816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f0788 00:19:51.666 [2024-07-15 14:36:31.087569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.087616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.094311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ed4e8 00:19:51.666 [2024-07-15 14:36:31.095108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.095139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.108626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e95a0 00:19:51.666 [2024-07-15 14:36:31.110091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.110122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.120533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fac10 00:19:51.666 [2024-07-15 14:36:31.121534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.121566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.132066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e3060 00:19:51.666 [2024-07-15 14:36:31.132960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.132992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.142955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5a90 00:19:51.666 [2024-07-15 14:36:31.143990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.144037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.157246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e3060 00:19:51.666 [2024-07-15 14:36:31.158959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.159006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.165889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f7100 00:19:51.666 [2024-07-15 14:36:31.166594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.166626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.179879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f5be8 00:19:51.666 [2024-07-15 14:36:31.181287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.181334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.191785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5220 00:19:51.666 [2024-07-15 14:36:31.192705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.192748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.203279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f9f68 00:19:51.666 [2024-07-15 14:36:31.204131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.204164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.213945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e1f80 00:19:51.666 [2024-07-15 14:36:31.214903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.214934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.227960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5658 00:19:51.666 [2024-07-15 14:36:31.229409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.229474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.239116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190de038 00:19:51.666 [2024-07-15 14:36:31.240376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.240424] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:51.666 [2024-07-15 14:36:31.250841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ec840 00:19:51.666 [2024-07-15 14:36:31.251974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.666 [2024-07-15 14:36:31.252021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.262346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e88f8 00:19:51.925 [2024-07-15 14:36:31.263302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.263334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.273831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f81e0 00:19:51.925 [2024-07-15 14:36:31.274659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.274691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.287011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e99d8 00:19:51.925 [2024-07-15 14:36:31.288305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.288351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.298180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eaef0 00:19:51.925 [2024-07-15 14:36:31.299342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.299389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.309361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e8d30 00:19:51.925 [2024-07-15 14:36:31.310356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.310389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.320361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f0ff8 00:19:51.925 [2024-07-15 14:36:31.321201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.321262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.334373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eee38 00:19:51.925 [2024-07-15 14:36:31.335469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.335530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.345155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f31b8 00:19:51.925 [2024-07-15 14:36:31.346418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.346451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.357143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eff18 00:19:51.925 [2024-07-15 14:36:31.357808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.357841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.369314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5a90 00:19:51.925 [2024-07-15 14:36:31.370344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.370377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.380223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e7c50 00:19:51.925 [2024-07-15 14:36:31.381095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.381126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.393719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e4578 00:19:51.925 [2024-07-15 14:36:31.394776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.394822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.405420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e01f8 00:19:51.925 [2024-07-15 14:36:31.406306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 
14:36:31.406339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.415380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eaef0 00:19:51.925 [2024-07-15 14:36:31.416460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.416505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.429108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f5378 00:19:51.925 [2024-07-15 14:36:31.430871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.430917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.439283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f46d0 00:19:51.925 [2024-07-15 14:36:31.441383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.441432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.452976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e1f80 00:19:51.925 [2024-07-15 14:36:31.454486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.454519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.464577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e1f80 00:19:51.925 [2024-07-15 14:36:31.466013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.466047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.477162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fef90 00:19:51.925 [2024-07-15 14:36:31.478713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.478744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.489402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e88f8 00:19:51.925 [2024-07-15 14:36:31.490423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:51.925 [2024-07-15 14:36:31.490457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.500987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f1ca0 00:19:51.925 [2024-07-15 14:36:31.501903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.501936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:51.925 [2024-07-15 14:36:31.515359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5ec8 00:19:51.925 [2024-07-15 14:36:31.517391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.925 [2024-07-15 14:36:31.517423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.524069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e6738 00:19:52.184 [2024-07-15 14:36:31.525131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.525178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.536284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb760 00:19:52.184 [2024-07-15 14:36:31.537334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.537382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.549546] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eff18 00:19:52.184 [2024-07-15 14:36:31.551107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.551154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.560507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e0a68 00:19:52.184 [2024-07-15 14:36:31.561823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.561870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.571922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f8a50 00:19:52.184 [2024-07-15 14:36:31.573033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14510 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.573080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.582838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ec840 00:19:52.184 [2024-07-15 14:36:31.583736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.583768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.596164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f1ca0 00:19:52.184 [2024-07-15 14:36:31.597548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.597595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.607214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f9f68 00:19:52.184 [2024-07-15 14:36:31.608407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.608454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.618785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6cc8 00:19:52.184 [2024-07-15 14:36:31.619900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.619947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.633227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e2c28 00:19:52.184 [2024-07-15 14:36:31.635055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.635086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.641868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e3d08 00:19:52.184 [2024-07-15 14:36:31.642691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.642731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.655943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190dece0 00:19:52.184 [2024-07-15 14:36:31.657461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12088 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.657509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.667930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e8d30 00:19:52.184 [2024-07-15 14:36:31.669429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.669477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.681109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ebb98 00:19:52.184 [2024-07-15 14:36:31.683144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.683190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.689695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190de8a8 00:19:52.184 [2024-07-15 14:36:31.690713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.690760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.703912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2510 00:19:52.184 [2024-07-15 14:36:31.705562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.705609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.714208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f9b30 00:19:52.184 [2024-07-15 14:36:31.716186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.716234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.726499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e88f8 00:19:52.184 [2024-07-15 14:36:31.727571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.727620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.737397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eaef0 00:19:52.184 [2024-07-15 14:36:31.738305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:8918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.738353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.748495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190edd58 00:19:52.184 [2024-07-15 14:36:31.749278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.749312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.759568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f0788 00:19:52.184 [2024-07-15 14:36:31.760163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.760196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:52.184 [2024-07-15 14:36:31.770215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb328 00:19:52.184 [2024-07-15 14:36:31.770975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.184 [2024-07-15 14:36:31.771008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.782879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2948 00:19:52.443 [2024-07-15 14:36:31.783850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.783911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.796985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fbcf0 00:19:52.443 [2024-07-15 14:36:31.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.798586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.808325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f8e88 00:19:52.443 [2024-07-15 14:36:31.809652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.809701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.819605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fb480 00:19:52.443 [2024-07-15 14:36:31.820744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.820798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.830446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ee190 00:19:52.443 [2024-07-15 14:36:31.831410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.831442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.841499] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e84c0 00:19:52.443 [2024-07-15 14:36:31.842301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.842336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.856261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e8d30 00:19:52.443 [2024-07-15 14:36:31.858049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.858114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.868056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f96f8 00:19:52.443 [2024-07-15 14:36:31.869797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.869831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.876630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190de8a8 00:19:52.443 [2024-07-15 14:36:31.877411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.877441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.891272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fd208 00:19:52.443 [2024-07-15 14:36:31.892730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.892771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.902451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb328 00:19:52.443 [2024-07-15 14:36:31.903661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.903708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.914041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ed0b0 00:19:52.443 [2024-07-15 14:36:31.915234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.928273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e0ea0 00:19:52.443 [2024-07-15 14:36:31.930121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.930167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.940525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ea248 00:19:52.443 [2024-07-15 14:36:31.942408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.942439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.952011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f7970 00:19:52.443 [2024-07-15 14:36:31.953736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.953789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.960857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e95a0 00:19:52.443 [2024-07-15 14:36:31.961717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.961747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.973997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e38d0 00:19:52.443 [2024-07-15 14:36:31.975085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.975117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.985516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fc560 00:19:52.443 [2024-07-15 
14:36:31.986419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.986450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:31.997088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fcdd0 00:19:52.443 [2024-07-15 14:36:31.998129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:31.998190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:32.011241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fac10 00:19:52.443 [2024-07-15 14:36:32.012943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:32.012975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:32.019672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e12d8 00:19:52.443 [2024-07-15 14:36:32.020397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:32.020428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:52.443 [2024-07-15 14:36:32.033689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f46d0 00:19:52.443 [2024-07-15 14:36:32.034946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.443 [2024-07-15 14:36:32.034977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:52.702 [2024-07-15 14:36:32.045077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e0a68 00:19:52.702 [2024-07-15 14:36:32.046158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.702 [2024-07-15 14:36:32.046205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:52.702 [2024-07-15 14:36:32.056326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f96f8 00:19:52.702 [2024-07-15 14:36:32.057261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.702 [2024-07-15 14:36:32.057294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:52.702 [2024-07-15 14:36:32.067609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb328 
00:19:52.702 [2024-07-15 14:36:32.068388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.068420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.082502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f20d8 00:19:52.703 [2024-07-15 14:36:32.084455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.084500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.090956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e38d0 00:19:52.703 [2024-07-15 14:36:32.091710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.091749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.103511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f4298 00:19:52.703 [2024-07-15 14:36:32.104814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.104859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.114585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e3498 00:19:52.703 [2024-07-15 14:36:32.115785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.115862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.125869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e6738 00:19:52.703 [2024-07-15 14:36:32.126819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.126852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.140261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6020 00:19:52.703 [2024-07-15 14:36:32.141885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.141916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.152879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with 
pdu=0x2000190e7818 00:19:52.703 [2024-07-15 14:36:32.154555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.154588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.163246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fa7d8 00:19:52.703 [2024-07-15 14:36:32.164372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.164405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.178127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e6738 00:19:52.703 [2024-07-15 14:36:32.179878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.179926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.186865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fa3a0 00:19:52.703 [2024-07-15 14:36:32.187657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.187689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.198930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb760 00:19:52.703 [2024-07-15 14:36:32.199712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.199751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.212526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f7da8 00:19:52.703 [2024-07-15 14:36:32.213858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.213904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.225026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fe720 00:19:52.703 [2024-07-15 14:36:32.226552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.226583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.236244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1612880) with pdu=0x2000190f6cc8 00:19:52.703 [2024-07-15 14:36:32.237465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.237527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.247805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f3a28 00:19:52.703 [2024-07-15 14:36:32.248786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.248848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.259009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e0630 00:19:52.703 [2024-07-15 14:36:32.259803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.259837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.273694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e7c50 00:19:52.703 [2024-07-15 14:36:32.275488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.275519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.281788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6cc8 00:19:52.703 [2024-07-15 14:36:32.282632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.282663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:52.703 [2024-07-15 14:36:32.293598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190df550 00:19:52.703 [2024-07-15 14:36:32.294437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.703 [2024-07-15 14:36:32.294468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:52.962 [2024-07-15 14:36:32.307309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fbcf0 00:19:52.962 [2024-07-15 14:36:32.308880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.962 [2024-07-15 14:36:32.308926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:52.962 [2024-07-15 14:36:32.318897] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6020 00:19:52.963 [2024-07-15 14:36:32.320249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.320295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.330919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e3498 00:19:52.963 [2024-07-15 14:36:32.331771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.331802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.342060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb760 00:19:52.963 [2024-07-15 14:36:32.342820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.342851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.353903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e49b0 00:19:52.963 [2024-07-15 14:36:32.354929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.354961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.364851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eaab8 00:19:52.963 [2024-07-15 14:36:32.365813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.365844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.379098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e8088 00:19:52.963 [2024-07-15 14:36:32.380719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.380750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.391457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e27f0 00:19:52.963 [2024-07-15 14:36:32.393246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.393277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.399926] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ea248 00:19:52.963 [2024-07-15 14:36:32.400752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.400785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.414122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f92c0 00:19:52.963 [2024-07-15 14:36:32.415620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.415652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.425260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ee5c8 00:19:52.963 [2024-07-15 14:36:32.426479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.426512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.436909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ddc00 00:19:52.963 [2024-07-15 14:36:32.438142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.438174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.449064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e49b0 00:19:52.963 [2024-07-15 14:36:32.450286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.450327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.460803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e01f8 00:19:52.963 [2024-07-15 14:36:32.461511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.461543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.472149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fbcf0 00:19:52.963 [2024-07-15 14:36:32.472754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.472785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:52.963 
[2024-07-15 14:36:32.485751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e88f8 00:19:52.963 [2024-07-15 14:36:32.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.487194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.495438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f7da8 00:19:52.963 [2024-07-15 14:36:32.496211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.496244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.508820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190efae0 00:19:52.963 [2024-07-15 14:36:32.510041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.510072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.522142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eaef0 00:19:52.963 [2024-07-15 14:36:32.523857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.523904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.530854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2510 00:19:52.963 [2024-07-15 14:36:32.531594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.531625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.544957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190eb760 00:19:52.963 [2024-07-15 14:36:32.546228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.546274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:52.963 [2024-07-15 14:36:32.556271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e6300 00:19:52.963 [2024-07-15 14:36:32.557440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:52.963 [2024-07-15 14:36:32.557486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:19:53.222 [2024-07-15 14:36:32.570919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6458 00:19:53.222 [2024-07-15 14:36:32.572895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.572926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.579415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ecc78 00:19:53.222 [2024-07-15 14:36:32.580376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.580407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.593245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fda78 00:19:53.222 [2024-07-15 14:36:32.594925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.594970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.603818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2d80 00:19:53.222 [2024-07-15 14:36:32.605760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.605818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.616739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190fc998 00:19:53.222 [2024-07-15 14:36:32.618205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.618250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.627838] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ecc78 00:19:53.222 [2024-07-15 14:36:32.629120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.629166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.638677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e7818 00:19:53.222 [2024-07-15 14:36:32.639909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.639956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.650494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f6020 00:19:53.222 [2024-07-15 14:36:32.651661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.651707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.665041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f5378 00:19:53.222 [2024-07-15 14:36:32.666852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.666897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.673628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f4f40 00:19:53.222 [2024-07-15 14:36:32.674477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.674508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.687944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f1430 00:19:53.222 [2024-07-15 14:36:32.689262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.689293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.698994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f0bc0 00:19:53.222 [2024-07-15 14:36:32.700265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.700298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.710676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ef6a8 00:19:53.222 [2024-07-15 14:36:32.711675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.711717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:53.222 [2024-07-15 14:36:32.722088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e7818 00:19:53.222 [2024-07-15 14:36:32.722949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.222 [2024-07-15 14:36:32.722979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.737319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e73e0 00:19:53.223 [2024-07-15 14:36:32.739178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.739209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.747237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5658 00:19:53.223 [2024-07-15 14:36:32.748125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.748156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.759416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e9168 00:19:53.223 [2024-07-15 14:36:32.760799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.760828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.770456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e5ec8 00:19:53.223 [2024-07-15 14:36:32.771558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.771590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.782091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e1710 00:19:53.223 [2024-07-15 14:36:32.783171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.783203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.796360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190df118 00:19:53.223 [2024-07-15 14:36:32.798124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.798170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:53.223 [2024-07-15 14:36:32.804896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e9e10 00:19:53.223 [2024-07-15 14:36:32.805640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.223 [2024-07-15 14:36:32.805670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.819166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190ef6a8 00:19:53.482 [2024-07-15 14:36:32.820623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.820655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.830318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e2c28 00:19:53.482 [2024-07-15 14:36:32.831427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.831459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.841904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e4578 00:19:53.482 [2024-07-15 14:36:32.842900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.842931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.853091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2510 00:19:53.482 [2024-07-15 14:36:32.853920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.853951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.867914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e01f8 00:19:53.482 [2024-07-15 14:36:32.869732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.869810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.876069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190e2c28 00:19:53.482 [2024-07-15 14:36:32.876906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.876952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:53.482 [2024-07-15 14:36:32.890274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612880) with pdu=0x2000190f2948 00:19:53.482 [2024-07-15 14:36:32.891707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.482 [2024-07-15 14:36:32.891796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:19:53.482
00:19:53.482 Latency(us)
00:19:53.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.482 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:53.482 nvme0n1 : 2.00 21338.34 83.35 0.00 0.00 5991.02 2412.92 16324.42
00:19:53.482 ===================================================================================================================
00:19:53.482 Total : 21338.34 83.35 0.00 0.00 5991.02 2412.92 16324.42
00:19:53.482 0
00:19:53.482 14:36:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:53.482 14:36:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:53.482 | .driver_specific
00:19:53.482 | .nvme_error
00:19:53.482 | .status_code
00:19:53.482 | .command_transient_transport_error'
00:19:53.482 14:36:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:53.482 14:36:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93725
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93725 ']'
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93725
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:53.740 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93725
00:19:53.741 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:53.741 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:53.741 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93725'
00:19:53.741 killing process with pid 93725
00:19:53.741 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93725
00:19:53.741 Received shutdown signal, test time was about 2.000000 seconds
00:19:53.741
00:19:53.741 Latency(us)
00:19:53.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.741 ===================================================================================================================
00:19:53.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:53.741 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93725
00:19:53.741 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:19:53.999
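
As a reading aid, the get_transient_errcount step traced just above amounts to the short shell sketch below. It is reconstructed from the logged commands (rpc.py bdev_get_iostat over the bperf.sock socket plus the jq filter), not copied from host/digest.sh, and it assumes the same socket path and bdev name as the trace.

#!/usr/bin/env bash
# Sketch (reconstructed from the trace above, not the verbatim test script):
# read the per-status-code NVMe error counters that bdevperf keeps because of
# bdev_nvme_set_options --nvme-error-stat, and require at least one
# command_transient_transport_error, i.e. at least one injected digest failure.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

get_transient_errcount() {
    "$rpc" -s "$sock" bdev_get_iostat -b "$1" | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)   # the run above reported 167
(( errcount > 0 ))                           # the check the harness asserts before tearing down

The non-zero count is what lets the traced (( 167 > 0 )) assertion pass, and the Latency(us) table above is self-consistent with the 4096-byte workload: 21338.34 IOPS × 4 KiB is roughly 83.35 MiB/s over the 2.00 s run.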
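The trace that continues below (qd=16 onward) shows run_bperf_err relaunching bdevperf for the 131072-byte, queue-depth-16 pass. A rough sketch of that sequence follows, with flags and paths copied from the logged commands; bperf_rpc and rpc_cmd are reduced to simple stand-ins for the harness helpers (the former talks to the bdevperf socket, the latter to the default SPDK RPC socket).

SPDK=/home/vagrant/spdk_repo/spdk
bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # RPCs to the bdevperf app
rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }                          # stand-in: default RPC socket

# 2-second randwrite job on core mask 0x2 with 131072-byte I/Os at queue depth 16;
# -z keeps bdevperf idle until perform_tests is sent over /var/tmp/bperf.sock.
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!    # 93796 in the log; the harness then waits for the RPC socket to appear

# Keep NVMe error counters per status code and retry failed I/O indefinitely, so the
# injected digest errors are recorded rather than failing the job.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Injection off while attaching, then attach the target with TCP data digest enabled.
rpc_cmd accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-arm CRC32C error injection with the logged arguments and kick off the run.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the data digest enabled on the attach and CRC32C corruption injected, the "Data digest error on tqpair=(0x1612bc0)" entries that follow are the intended result, surfacing as COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that the infinite retry setting absorbs instead of failed I/O.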
14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93796 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93796 /var/tmp/bperf.sock 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93796 ']' 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.999 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:53.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:53.999 Zero copy mechanism will not be used. 00:19:53.999 [2024-07-15 14:36:33.424115] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:53.999 [2024-07-15 14:36:33.424207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93796 ] 00:19:53.999 [2024-07-15 14:36:33.558521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.256 [2024-07-15 14:36:33.618614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.256 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.256 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:54.256 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:54.256 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:54.513 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:54.513 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.513 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:54.513 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.513 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.513 14:36:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.771 nvme0n1 00:19:54.771 14:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:54.771 14:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.771 14:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:54.771 14:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.771 14:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:54.771 14:36:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:55.029 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:55.029 Zero copy mechanism will not be used. 00:19:55.029 Running I/O for 2 seconds... 00:19:55.029 [2024-07-15 14:36:34.424285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.424613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.424644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.429696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.430040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.430071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.435126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.435432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.435455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.440319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.440610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.440641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.445482] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.445785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 
14:36:34.445814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.450662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.450975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.451002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.455871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.456189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.456217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.461127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.461426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.461455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.466388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.466677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.466714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.471717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.472059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.472087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.477028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.477354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.477383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.482420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.482740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.482768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.487867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.488163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.488191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.493134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.493422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.493450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.498371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.498676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.498716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.503595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.503899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.503928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.508811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.509111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.509139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.514019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.514317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.514346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.519295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.519586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.519614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.524617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.524928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.524956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.529881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.530172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.530200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.535040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.535328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.535357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.540303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.540608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.540636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.545809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.546108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.546138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.551255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.551609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.551653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.556590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.556910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.562035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.562371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.562399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.567410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.567770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.030 [2024-07-15 14:36:34.567795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.030 [2024-07-15 14:36:34.572839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.030 [2024-07-15 14:36:34.573145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.573172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.578021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.578354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.578382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.583151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.583439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.583466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.588313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.588600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.588627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.593426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 
[2024-07-15 14:36:34.593772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.593798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.598679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.599012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.599039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.604028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.604351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.604380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.609362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.609651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.609679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.614808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.615112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.615139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.031 [2024-07-15 14:36:34.620159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.031 [2024-07-15 14:36:34.620494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.031 [2024-07-15 14:36:34.620522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.625575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.625876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.625904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.631028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.631316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.631360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.636411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.636735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.636771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.641589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.641935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.641963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.646680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.647031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.647057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.651805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.652092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.652120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.290 [2024-07-15 14:36:34.657348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.290 [2024-07-15 14:36:34.657638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.290 [2024-07-15 14:36:34.657666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.662836] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.663166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.663194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.668213] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.668552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.668580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.673525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.673878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.673907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.678886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.679176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.679202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.684028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.684309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.684351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.689121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.689410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.689451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.694181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.694505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.694534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.699333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.699612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.699639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:55.291 [2024-07-15 14:36:34.704624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.704942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.704971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.709950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.710279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.710331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.715186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.715469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.715496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.720572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.720875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.720903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.725737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.726048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.726078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.731323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.731633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.731664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.736518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.736822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.736849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.741829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.742163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.742192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.747142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.747464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.747491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.752393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.752683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.752720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.757644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.757981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.758009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.762901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.763193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.763220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.768189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.768524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.768553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.773760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.774060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.774088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.778932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.779223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.779251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.784155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.784453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.784481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.789232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.789534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.789561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.794424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.794739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.794779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.799611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.799936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.799963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.804931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.805229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.805256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.810043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.291 [2024-07-15 14:36:34.810420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.291 [2024-07-15 14:36:34.810450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.291 [2024-07-15 14:36:34.815254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.815561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.815590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.820489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.820788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.820816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.825679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.826043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.826072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.831095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.831379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.831407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.836222] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.836532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.836560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.841466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.841800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.841829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.846696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.847003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 
[2024-07-15 14:36:34.847032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.851756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.852037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.852064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.856847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.857160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.857190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.862009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.862330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.862373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.867098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.867375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.867404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.872186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.872483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.872513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.877348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.877650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.877680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.292 [2024-07-15 14:36:34.882713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.292 [2024-07-15 14:36:34.883030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.292 [2024-07-15 14:36:34.883077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.888070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.888374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.888407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.893455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.893796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.893826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.898799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.899122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.899152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.904097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.904401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.904435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.909398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.909699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.909740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.914801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.915120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.915150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.920338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.920663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.920692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.551 [2024-07-15 14:36:34.925860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.551 [2024-07-15 14:36:34.926209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.551 [2024-07-15 14:36:34.926239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.931419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.931732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.931756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.936905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.937269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.937300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.942293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.942617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.942648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.947543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.947861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.947891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.952847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.953133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.953163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.957989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.958280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.958333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.963290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.963642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.963675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.968906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.969256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.969286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.974205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.974540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.974573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.979457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.979803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.979828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.984847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.985153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.985200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.990287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:34.990610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.990656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:34.995515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 
[2024-07-15 14:36:34.995818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:34.995847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.000721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.001063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.001093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.006068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.006396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.006428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.011589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.011921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.011950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.017063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.017371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.017402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.022587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.022954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.022984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.028054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.028354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.028385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.033497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.033847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.033877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.038974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.039319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.039350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.044405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.044721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.044760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.049695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.050045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.050074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.054876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.055163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.055192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.060235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.060549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.060580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.065632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.065992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.066024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.071258] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.071602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.071634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.076509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.076826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.552 [2024-07-15 14:36:35.076853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.552 [2024-07-15 14:36:35.081874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.552 [2024-07-15 14:36:35.082227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.082254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.087262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.087595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.087627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.092803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.093115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.093145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.098279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.098600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.098632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.103839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.104160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.104191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:55.553 [2024-07-15 14:36:35.109251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.109580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.109612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.114785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.115122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.115153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.120359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.120658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.120705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.125660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.125995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.126026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.130987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.131304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.131334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.136318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.136614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.136645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.553 [2024-07-15 14:36:35.141549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.553 [2024-07-15 14:36:35.141900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.553 [2024-07-15 14:36:35.141930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.146977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.147330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.147361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.152421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.152716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.152754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.157744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.158057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.158088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.163080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.163386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.163417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.168318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.168636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.168678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.173438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.173762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.173793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.178521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.178861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.178890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.183877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.184199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.184226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.189120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.189447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.189474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.194455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.194773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.194800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.199677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.200000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.200027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.204991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.205308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.205356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.210439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.210819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.210849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.215853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.216169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.216199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.221216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.221533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.221566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.226491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.226846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.226877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.231813] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.232110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.232139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.237305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.237601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.813 [2024-07-15 14:36:35.237632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.813 [2024-07-15 14:36:35.242675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.813 [2024-07-15 14:36:35.242993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.243024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.247907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.248205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.248235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.253159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.253455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 
[2024-07-15 14:36:35.253484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.258606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.258964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.258993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.263926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.264224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.264253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.269196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.269499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.269529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.274442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.274798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.274827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.279804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.280143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.280173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.285318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.285661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.285693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.290678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.291046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.291076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.296055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.296357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.296389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.301335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.301656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.301686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.306719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.307053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.307081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.311977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.312276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.312305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.317026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.317319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.317346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.322138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.322465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.322495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.327289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.327605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.327634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.332578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.332911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.332941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.338365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.338666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.338722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.343705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.344024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.344070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.349075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.349375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.349405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.354403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.354754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.354793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.359633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.359937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.359967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.364847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.365144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.365172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.370013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.370367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.370398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.375262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.375558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.375589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.380479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.380831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.380862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.385779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.386086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.386129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.391064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.391361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.391391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:55.814 [2024-07-15 14:36:35.396211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.814 [2024-07-15 14:36:35.396538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.814 [2024-07-15 14:36:35.396568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.815 [2024-07-15 14:36:35.401494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:55.815 
[2024-07-15 14:36:35.401825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.815 [2024-07-15 14:36:35.401851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:55.815 [2024-07-15 14:36:35.407098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.407467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.407501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.412421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.412750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.417634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.417962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.417992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.422906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.423254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.423296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.428172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.428515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.428541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.433401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.433734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.433760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.438684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.439022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.439052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.444061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.444408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.444439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.449354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.449649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.449679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.454520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.454855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.454885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.459644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.459943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.459973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.464701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.465017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.465046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.469828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.470154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.470183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.475017] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.475305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.475333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.480253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.480603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.480633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.485536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.485895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.485925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.490745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.491063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.491092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.496025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.496375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.496420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.501329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.501633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.501661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.506497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.506870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.506900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:56.074 [2024-07-15 14:36:35.511722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.512005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.512048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.516844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.517129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.517157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.074 [2024-07-15 14:36:35.521974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.074 [2024-07-15 14:36:35.522244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.074 [2024-07-15 14:36:35.522273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.527451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.527788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.527831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.532961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.533259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.533288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.538416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.538768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.538798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.543923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.544231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.544261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.549460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.549794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.549825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.554904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.555226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.555257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.560356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.560705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.560749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.565756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.566056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.566087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.571169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.571476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.571508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.576580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.576914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.576946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.582007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.582359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.582398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.587496] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.587839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.587873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.592848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.593150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.593180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.598162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.598506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.598537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.603462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.603789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.603819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.608631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.608962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.608992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.613824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.614154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.614184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.619055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.619358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.619389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.624219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.624547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.624579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.629476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.629793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.629822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.634678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.635010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.635043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.639863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.640198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.640230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.645355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.645657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.645689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.650734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.651071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.651097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.656131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.656445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 
[2024-07-15 14:36:35.656476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.661593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.661911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.661942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.075 [2024-07-15 14:36:35.666888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.075 [2024-07-15 14:36:35.667188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.075 [2024-07-15 14:36:35.667219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.672122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.672431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.672461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.677338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.677669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.677709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.682561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.682879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.682910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.687854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.688186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.688216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.693178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.693492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.693524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.698580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.698903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.698934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.703929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.704260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.704290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.709250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.709565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.709597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.714544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.714857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.714889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.719868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.720187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.720218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.725054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.725355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.725387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.730284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.730597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.730628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.735512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.735829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.735859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.740684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.740997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.741028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.746005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.746355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.746386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.751235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.751539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.751572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.756503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.756832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.756858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.761759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.762064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.762096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.766975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.767291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.767324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.772237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.772556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.772589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.777527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.777861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.777887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.782748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.783069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.783101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.788015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.788334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.788364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.793212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.793534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.793567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.798440] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.798759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.798785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.803650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 
[2024-07-15 14:36:35.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.804001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.808893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.809192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.809224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.814223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.814582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.814613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.819654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.820000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.820032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.824921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.825253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.825286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.830137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.830479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.830511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.835403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.835711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.835752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.840630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.840961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.840993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.845954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.846259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.846290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.851153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.851457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.851489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.856449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.856785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.856811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.861716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.862024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.862056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.867046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.867345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.867378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.872316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.872622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.872652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.877510] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.877838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.877864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.882759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.883082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.883113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.887964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.888272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.888303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.893230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.893556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.893587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.898520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.898865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.898897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.903702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.904035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.904066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.908952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.909260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.909291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:19:56.354 [2024-07-15 14:36:35.914195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.354 [2024-07-15 14:36:35.914534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.354 [2024-07-15 14:36:35.914564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.354 [2024-07-15 14:36:35.919634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.355 [2024-07-15 14:36:35.920007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.355 [2024-07-15 14:36:35.920041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.355 [2024-07-15 14:36:35.924957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.355 [2024-07-15 14:36:35.925283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.355 [2024-07-15 14:36:35.925314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.355 [2024-07-15 14:36:35.930274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.355 [2024-07-15 14:36:35.930608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.355 [2024-07-15 14:36:35.930641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.355 [2024-07-15 14:36:35.935567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.355 [2024-07-15 14:36:35.935896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.355 [2024-07-15 14:36:35.935928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.355 [2024-07-15 14:36:35.940792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.355 [2024-07-15 14:36:35.941082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.355 [2024-07-15 14:36:35.941112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.355 [2024-07-15 14:36:35.946042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.355 [2024-07-15 14:36:35.946403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.355 [2024-07-15 14:36:35.946436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.951318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.951624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.951655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.958776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.959159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.959196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.964151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.964626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.964668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.969834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.970292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.970345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.975443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.975904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.975944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.980932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.981400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.981440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.986367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.986833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.986872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.615 [2024-07-15 14:36:35.991715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.615 [2024-07-15 14:36:35.992207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.615 [2024-07-15 14:36:35.992246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:35.997066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:35.997403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:35.997429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.002413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.002798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.002829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.007663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.008030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.008060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.012877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.013168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.013197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.017974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.018263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.018294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.023135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.023476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.023508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.028593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.028924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.028957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.034087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.034414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.034447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.039518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.039865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.039890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.045073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.045392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.045439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.050505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.050858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.050890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.055925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.056247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.056280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.061285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.061600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 
[2024-07-15 14:36:36.061633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.066628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.066942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.066976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.071845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.072152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.072183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.077015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.077335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.077369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.082256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.082568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.082600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.087608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.087925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.087957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.092868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.093174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.093206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.098137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.616 [2024-07-15 14:36:36.098450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.616 [2024-07-15 14:36:36.098483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.616 [2024-07-15 14:36:36.103445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.103758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.103806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.108722] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.109032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.109063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.114004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.114323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.114354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.119294] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.119611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.119642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.124515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.124854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.124887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.129796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.130121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.130154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.135094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.135411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.135444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.140407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.140752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.140783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.145638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.146009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.150891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.151197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.151229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.156146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.156458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.156491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.161447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.161769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.161800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.166797] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.167097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.167129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.171994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.172300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.172348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.177196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.177500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.177533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.182441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.182765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.182790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.187591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.187910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.187942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.192862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.193171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.193197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.197998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.198341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.198366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.203538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 [2024-07-15 14:36:36.203861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.617 [2024-07-15 14:36:36.203888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.617 [2024-07-15 14:36:36.208791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.617 
[2024-07-15 14:36:36.209094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.618 [2024-07-15 14:36:36.209128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.214022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.214337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.214369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.219338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.219660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.219692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.224579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.224894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.224920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.229837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.230146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.230190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.235108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.235407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.235432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.240327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.240631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.240664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.245615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.245937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.245968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.250775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.251073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.251103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.256012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.256335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.256364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.261344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.261665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.261708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.266610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.266925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.266958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.271899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.272225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.272256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.277118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.277414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.277444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.282497] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.282834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.282860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.287756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.288063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.288096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.293004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.293311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.293338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.298255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.298568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.298593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.303522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.303845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.303872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.308804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.309120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.309148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.314104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.314453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.314482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
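Each of the records above follows the same pattern: tcp.c:data_crc32_calc_done detects a data digest (CRC32C) mismatch on the TCP qpair, and the in-flight WRITE is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The digest_error test counts these completions through the bperf RPC socket; a condensed sketch of the check that appears further down in this log (socket path, bdev name, and jq filter are the ones from this run, which reported 375 such completions) is:

  # Read the transient-transport-error counter from the bperf bdev iostat
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The error-injection run only passes if at least one such completion was observed
  (( count > 0 ))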
00:19:56.883 [2024-07-15 14:36:36.319536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.319878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.319907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.324903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.325235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.883 [2024-07-15 14:36:36.325264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.883 [2024-07-15 14:36:36.330179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.883 [2024-07-15 14:36:36.330507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.330535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.335418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.335743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.335771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.340709] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.341014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.341046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.345912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.346220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.346252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.351144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.351457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.351483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.356379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.356714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.356746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.361635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.361956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.361990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.366869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.367182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.367213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.372150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.372464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.372496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.377399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.377740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.377772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.382676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.382994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.383027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.387888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.388192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.388224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.393086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.393389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.393415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.398260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.398578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.398604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.403457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.403791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.403818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:56.884 [2024-07-15 14:36:36.408686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1612bc0) with pdu=0x2000190fef90 00:19:56.884 [2024-07-15 14:36:36.409004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.884 [2024-07-15 14:36:36.409036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:56.884 00:19:56.884 Latency(us) 00:19:56.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.884 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:56.884 nvme0n1 : 2.00 5817.96 727.24 0.00 0.00 2743.95 2204.39 11736.90 00:19:56.884 =================================================================================================================== 00:19:56.884 Total : 5817.96 727.24 0.00 0.00 2743.95 2204.39 11736.90 00:19:56.884 0 00:19:56.884 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:56.884 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:56.884 | .driver_specific 00:19:56.884 | .nvme_error 00:19:56.884 | .status_code 00:19:56.884 | .command_transient_transport_error' 00:19:56.884 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:56.884 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 375 > 0 )) 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93796 00:19:57.142 14:36:36 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93796 ']' 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93796 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93796 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:57.142 killing process with pid 93796 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93796' 00:19:57.142 Received shutdown signal, test time was about 2.000000 seconds 00:19:57.142 00:19:57.142 Latency(us) 00:19:57.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.142 =================================================================================================================== 00:19:57.142 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93796 00:19:57.142 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93796 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93506 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93506 ']' 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93506 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93506 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:57.400 killing process with pid 93506 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93506' 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93506 00:19:57.400 14:36:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93506 00:19:57.658 00:19:57.658 real 0m16.811s 00:19:57.658 user 0m32.129s 00:19:57.658 sys 0m4.262s 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.658 ************************************ 00:19:57.658 END TEST nvmf_digest_error 00:19:57.658 ************************************ 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - 
SIGINT SIGTERM EXIT 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.658 rmmod nvme_tcp 00:19:57.658 rmmod nvme_fabrics 00:19:57.658 rmmod nvme_keyring 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93506 ']' 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93506 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93506 ']' 00:19:57.658 Process with pid 93506 is not found 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93506 00:19:57.658 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93506) - No such process 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93506 is not found' 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.658 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.916 14:36:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:57.916 00:19:57.916 real 0m34.485s 00:19:57.916 user 1m4.892s 00:19:57.916 sys 0m8.717s 00:19:57.916 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:57.917 14:36:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:57.917 ************************************ 00:19:57.917 END TEST nvmf_digest 00:19:57.917 ************************************ 00:19:57.917 14:36:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:57.917 14:36:37 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:19:57.917 14:36:37 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:57.917 14:36:37 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:57.917 14:36:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:57.917 14:36:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.917 14:36:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
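Before the mdns discovery test begins, the nvmftestfini trace above tears down the state left by the digest tests. Stripped of the xtrace prefixes, the sequence it ran amounts to roughly the following (the module names, the 93506 target pid, and the nvmf_init_if interface are the ones from this run):

  modprobe -v -r nvme-tcp       # verbose removal; the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above are its output
  modprobe -v -r nvme-fabrics   # second pass in case nvme_fabrics was still loaded
  kill -0 93506 2>/dev/null && kill 93506   # target had already exited, hence "No such process"
  ip -4 addr flush nvmf_init_if # drop the addresses left on the initiator-side veth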
00:19:57.917 ************************************ 00:19:57.917 START TEST nvmf_mdns_discovery 00:19:57.917 ************************************ 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:57.917 * Looking for test storage... 00:19:57.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=de9cbd2d-f291-4e0a-9053-0006bfbcdd95 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:57.917 
14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:57.917 Cannot find device "nvmf_tgt_br" 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:57.917 Cannot find device "nvmf_tgt_br2" 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:57.917 Cannot find device "nvmf_tgt_br" 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:57.917 Cannot find device "nvmf_tgt_br2" 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:57.917 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:58.175 14:36:37 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.175 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:58.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:19:58.434 00:19:58.434 --- 10.0.0.2 ping statistics --- 00:19:58.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.434 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:58.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:58.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:19:58.434 00:19:58.434 --- 10.0.0.3 ping statistics --- 00:19:58.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.434 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:58.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:58.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:58.434 00:19:58.434 --- 10.0.0.1 ping statistics --- 00:19:58.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.434 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94072 00:19:58.434 
14:36:37 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94072 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94072 ']' 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.434 14:36:37 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.434 [2024-07-15 14:36:37.865280] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 00:19:58.434 [2024-07-15 14:36:37.865411] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.434 [2024-07-15 14:36:38.008943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.693 [2024-07-15 14:36:38.066920] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.693 [2024-07-15 14:36:38.066972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.693 [2024-07-15 14:36:38.066985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.693 [2024-07-15 14:36:38.066994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.693 [2024-07-15 14:36:38.067002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
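Stripped of the xtrace prefixes, the nvmf_veth_init sequence traced above amounts to the sketch below. This is a reconstruction from the commands visible in this log, not the nvmf/common.sh source; the namespace, interface, bridge, and address names (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_br, 10.0.0.1-3) are copied from the trace, everything else (ordering comments, backgrounding) is an assumption.

#!/usr/bin/env bash
# Sketch of the veth/namespace topology built by nvmf_veth_init, as traced above.
# Run as root; names and addresses are taken from the log.
set -e

ip netns add nvmf_tgt_ns_spdk

# One initiator-side veth pair plus two target-side pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peer interfaces together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic, allow intra-bridge forwarding, then verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# The NVMe-oF target is then launched inside the namespace, exactly as traced;
# the harness backgrounds it and waits for /var/tmp/spdk.sock (backgrounding assumed).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &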
00:19:58.693 [2024-07-15 14:36:38.067039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 [2024-07-15 14:36:38.214626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 [2024-07-15 14:36:38.222773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 null0 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:19:58.693 null1 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 null2 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 null3 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.693 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94107 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94107 /tmp/host.sock 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94107 ']' 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.693 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:58.951 [2024-07-15 14:36:38.326094] Starting SPDK v24.09-pre git sha1 72fc6988f / DPDK 24.03.0 initialization... 
00:19:58.951 [2024-07-15 14:36:38.326416] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94107 ] 00:19:58.951 [2024-07-15 14:36:38.462950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.951 [2024-07-15 14:36:38.519759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94124 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:59.208 14:36:38 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:59.208 Process 977 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:59.208 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:59.208 Successfully dropped root privileges. 00:19:59.208 avahi-daemon 0.8 starting up. 00:19:59.208 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:00.141 Successfully called chroot(). 00:20:00.141 Successfully dropped remaining capabilities. 00:20:00.141 No service file found in /etc/avahi/services. 00:20:00.141 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:00.141 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:00.141 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:00.141 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:00.141 Network interface enumeration completed. 00:20:00.141 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 00:20:00.141 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:20:00.141 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:20:00.141 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:20:00.141 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 1422186688. 
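The avahi-daemon instance started above receives its configuration through a process substitution (the echo -e '[server]...' fed in as /dev/fd/63). Written out as a regular file, the same configuration looks like the sketch below; the file path is illustrative, only the [server] keys are taken from the traced command.

# Equivalent of the inline configuration passed to avahi-daemon above.
cat > /tmp/avahi-nvmf.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF

# Run avahi inside the target namespace so it only advertises on the test
# interfaces, mirroring "ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63".
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf

Restricting avahi to nvmf_tgt_if/nvmf_tgt_if2 is what produces the two mDNS interface registrations for 10.0.0.2 and 10.0.0.3 shown in the startup output above.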
00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.141 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@90 -- # notify_id=0 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:20:00.400 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@104 -- # get_subsystem_names 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.401 14:36:39 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.659 [2024-07-15 14:36:40.031624] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@104 -- # [[ '' == '' ]] 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # get_bdev_list 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.659 14:36:40 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ '' == '' ]] 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.659 [2024-07-15 14:36:40.083283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:00.659 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.660 [2024-07-15 14:36:40.123221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.660 
14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.660 [2024-07-15 14:36:40.131175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.660 14:36:40 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # sleep 5 00:20:01.594 [2024-07-15 14:36:40.931613] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:02.161 [2024-07-15 14:36:41.531634] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:02.161 [2024-07-15 14:36:41.531675] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:02.161 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:02.161 cookie is 0 00:20:02.161 is_local: 1 00:20:02.161 our_own: 0 00:20:02.161 wide_area: 0 00:20:02.161 multicast: 1 00:20:02.161 cached: 1 00:20:02.161 [2024-07-15 14:36:41.631629] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:02.161 [2024-07-15 14:36:41.631669] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:20:02.161 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:02.161 cookie is 0 00:20:02.161 is_local: 1 00:20:02.161 our_own: 0 00:20:02.161 wide_area: 0 00:20:02.161 multicast: 1 00:20:02.161 cached: 1 00:20:02.161 [2024-07-15 14:36:41.631698] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:20:02.161 [2024-07-15 14:36:41.731632] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:02.162 [2024-07-15 14:36:41.731669] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:02.162 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:02.162 cookie is 0 00:20:02.162 is_local: 1 00:20:02.162 our_own: 0 00:20:02.162 wide_area: 0 00:20:02.162 multicast: 1 00:20:02.162 cached: 1 00:20:02.420 [2024-07-15 14:36:41.831629] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:20:02.420 [2024-07-15 14:36:41.831668] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:20:02.420 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:02.420 cookie is 0 00:20:02.420 is_local: 1 00:20:02.420 our_own: 0 00:20:02.420 wide_area: 0 00:20:02.420 multicast: 1 00:20:02.420 cached: 1 00:20:02.420 [2024-07-15 14:36:41.831684] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:20:02.986 [2024-07-15 14:36:42.539199] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:02.986 [2024-07-15 14:36:42.539246] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:02.986 [2024-07-15 14:36:42.539283] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:03.245 [2024-07-15 14:36:42.625352] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:20:03.245 [2024-07-15 14:36:42.682557] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:20:03.245 [2024-07-15 14:36:42.682596] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:20:03.245 [2024-07-15 14:36:42.739076] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:03.245 [2024-07-15 14:36:42.739119] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:03.245 [2024-07-15 14:36:42.739157] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:03.245 [2024-07-15 14:36:42.825256] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:20:03.504 [2024-07-15 14:36:42.881559] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:20:03.504 [2024-07-15 14:36:42.881610] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_mdns_discovery_svcs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.048 
14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ mdns == \m\d\n\s ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_discovery_ctrlrs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_subsystem_names 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # get_bdev_list 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 
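For orientation before the failing check below: condensed from the rpc_cmd calls traced above (a second nvmf_tgt instance acts as the host and listens on /tmp/host.sock), the discovery setup under test is roughly the following. Command names and arguments are copied from the trace; the grouping comments are added here. The target had already been prepared earlier in the log with nvmf_set_config --discovery-filter=address, framework_start_init, nvmf_create_transport -t tcp, the null0-null3 bdevs, and a discovery listener on 10.0.0.2:8009.

# Host side: enable bdev_nvme logging and start mDNS-based discovery.
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Target side (default /var/tmp/spdk.sock): subsystems, namespaces, listeners,
# then the mDNS pull registration request that avahi advertises.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_publish_mdns_prr
sleep 5   # give avahi and the discovery poller time to resolve and attach (as traced)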
00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@135 -- # get_subsystem_paths mdns0_nvme0 00:20:06.048 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@135 -- # [[ 4420 == \4\4\2\0 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@136 -- # get_subsystem_paths mdns1_nvme0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@136 -- # [[ 4420 == \4\4\2\0 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # get_mdns_discovery_traddr 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # jq -r '.[].trid.traddr' 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # sort 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # xargs 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # [[ null == \1\0\.\0\.\0\.\2 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # trap - ERR 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # print_backtrace 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1155 -- # local args 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1157 -- # 
xtrace_disable 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.049 ========== Backtrace start: ========== 00:20:06.049 00:20:06.049 in /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh:137 -> main(["--transport=tcp"]) 00:20:06.049 ... 00:20:06.049 132 [[ $(get_discovery_ctrlrs) == "mdns0_nvme mdns1_nvme" ]] 00:20:06.049 133 [[ "$(get_subsystem_names)" == "mdns0_nvme0 mdns1_nvme0" ]] 00:20:06.049 134 [[ "$(get_bdev_list)" == "mdns0_nvme0n1 mdns1_nvme0n1" ]] 00:20:06.049 135 [[ "$(get_subsystem_paths mdns0_nvme0)" == "$NVMF_PORT" ]] 00:20:06.049 136 [[ "$(get_subsystem_paths mdns1_nvme0)" == "$NVMF_PORT" ]] 00:20:06.049 => 137 [[ "$(get_mdns_discovery_traddr)" == "$NVMF_FIRST_TARGET_IP" ]] 00:20:06.049 138 get_notification_count 00:20:06.049 139 [[ $notification_count == 2 ]] 00:20:06.049 140 00:20:06.049 141 # Adding a namespace isn't a discovery function, but do it here anyways just to confirm we see a new bdev. 00:20:06.049 142 $rpc_py nvmf_subsystem_add_ns ${NQN}0 null1 00:20:06.049 ... 00:20:06.049 00:20:06.049 ========== Backtrace end ========== 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1194 -- # return 0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@1 -- # process_shm --id 0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@806 -- # type=--id 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@807 -- # id=0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:06.049 nvmf_trace.0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@821 -- # return 0 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@1 -- # nvmftestfini 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.049 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.049 rmmod nvme_tcp 00:20:06.049 rmmod nvme_fabrics 00:20:06.311 rmmod nvme_keyring 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94072 ']' 00:20:06.311 14:36:45 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94072 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94072 ']' 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94072 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94072 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.311 killing process with pid 94072 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94072' 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94072 00:20:06.311 14:36:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94072 00:20:06.311 [2024-07-15 14:36:45.693541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169d2a0 is same with the state(5) to be set 00:20:06.311 [2024-07-15 14:36:45.693758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.693856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.693871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e32f0 is same with the state(5) to be set 00:20:06.311 [2024-07-15 14:36:45.694487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169d2a0 (9): Bad file descriptor 00:20:06.311 [2024-07-15 14:36:45.694520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e32f0 (9): Bad file descriptor 00:20:06.311 [2024-07-15 14:36:45.699892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.699927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.699939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.699949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.699959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.699968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.699978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.699987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.699996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.311 [2024-07-15 14:36:45.702371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.702403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.702416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.702425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.702435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 
14:36:45.702444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.702454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.311 [2024-07-15 14:36:45.702463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.311 [2024-07-15 14:36:45.702472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.311 [2024-07-15 14:36:45.709866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.311 [2024-07-15 14:36:45.712314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.311 [2024-07-15 14:36:45.719893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.312 [2024-07-15 14:36:45.720107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.720145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.312 [2024-07-15 14:36:45.720157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.720179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.720196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.720205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.720216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.312 [2024-07-15 14:36:45.720240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.312 [2024-07-15 14:36:45.722351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.312 [2024-07-15 14:36:45.722477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.722500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.312 [2024-07-15 14:36:45.722511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.722530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.722544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.722555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.722570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 
00:20:06.312 [2024-07-15 14:36:45.722594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.312 [2024-07-15 14:36:45.730025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.312 [2024-07-15 14:36:45.730246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.730272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.312 [2024-07-15 14:36:45.730284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.730307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.730335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.730346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.730357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.312 [2024-07-15 14:36:45.730374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.312 [2024-07-15 14:36:45.732414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.312 [2024-07-15 14:36:45.732547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.732570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.312 [2024-07-15 14:36:45.732581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.732600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.732616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.732625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.732635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.312 [2024-07-15 14:36:45.732653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.312 [2024-07-15 14:36:45.740167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.312 [2024-07-15 14:36:45.740330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.740353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.312 [2024-07-15 14:36:45.740366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.740387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.740403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.740412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.740422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.312 [2024-07-15 14:36:45.740438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.312 [2024-07-15 14:36:45.742493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.312 [2024-07-15 14:36:45.742615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.742638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.312 [2024-07-15 14:36:45.742650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.742668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.742684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.742693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.742717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.312 [2024-07-15 14:36:45.742735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.312 [2024-07-15 14:36:45.750258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.312 [2024-07-15 14:36:45.750416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.750441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.312 [2024-07-15 14:36:45.750454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.750475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.750492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.750501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.750511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.312 [2024-07-15 14:36:45.750527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.312 [2024-07-15 14:36:45.752560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.312 [2024-07-15 14:36:45.752670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.752692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.312 [2024-07-15 14:36:45.752716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.312 [2024-07-15 14:36:45.752734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.312 [2024-07-15 14:36:45.752749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.312 [2024-07-15 14:36:45.752758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.312 [2024-07-15 14:36:45.752767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.312 [2024-07-15 14:36:45.752782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.312 [2024-07-15 14:36:45.760357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.312 [2024-07-15 14:36:45.760550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.312 [2024-07-15 14:36:45.760574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.312 [2024-07-15 14:36:45.760587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.760608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.760624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.760633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.760644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.313 [2024-07-15 14:36:45.760660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.313 [2024-07-15 14:36:45.762632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.313 [2024-07-15 14:36:45.762747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.762768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.313 [2024-07-15 14:36:45.762779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.762796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.762811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.762819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.762828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.313 [2024-07-15 14:36:45.762843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.313 [2024-07-15 14:36:45.770471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.313 [2024-07-15 14:36:45.770559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.770580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.313 [2024-07-15 14:36:45.770591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.770607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.770622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.770631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.770639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.313 [2024-07-15 14:36:45.770654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.313 [2024-07-15 14:36:45.772697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.313 [2024-07-15 14:36:45.772819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.772839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.313 [2024-07-15 14:36:45.772850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.772866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.772881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.772890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.772899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.313 [2024-07-15 14:36:45.772913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.313 [2024-07-15 14:36:45.780526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.313 [2024-07-15 14:36:45.780651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.780672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.313 [2024-07-15 14:36:45.780683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.780699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.780742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.780752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.780761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.313 [2024-07-15 14:36:45.780777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.313 [2024-07-15 14:36:45.782789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.313 [2024-07-15 14:36:45.782904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.782925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.313 [2024-07-15 14:36:45.782936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.782952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.782966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.782975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.782984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.313 [2024-07-15 14:36:45.782998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.313 [2024-07-15 14:36:45.790600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.313 [2024-07-15 14:36:45.790715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.790737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.313 [2024-07-15 14:36:45.790748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.790766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.790780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.790789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.790798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.313 [2024-07-15 14:36:45.790813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.313 [2024-07-15 14:36:45.792856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.313 [2024-07-15 14:36:45.792972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.792992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.313 [2024-07-15 14:36:45.793003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.793019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.793034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.793042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.793051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.313 [2024-07-15 14:36:45.793065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.313 [2024-07-15 14:36:45.800667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.313 [2024-07-15 14:36:45.800812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.313 [2024-07-15 14:36:45.800833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.313 [2024-07-15 14:36:45.800845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.313 [2024-07-15 14:36:45.800861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.313 [2024-07-15 14:36:45.800876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.313 [2024-07-15 14:36:45.800884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.313 [2024-07-15 14:36:45.800893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.313 [2024-07-15 14:36:45.800908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.314 [2024-07-15 14:36:45.802925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.314 [2024-07-15 14:36:45.803037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.803057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.314 [2024-07-15 14:36:45.803067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.803083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.803097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.803105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.803114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.314 [2024-07-15 14:36:45.803128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.314 [2024-07-15 14:36:45.810779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.314 [2024-07-15 14:36:45.810900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.810921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.314 [2024-07-15 14:36:45.810932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.810949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.810963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.810971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.810980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.314 [2024-07-15 14:36:45.810995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.314 [2024-07-15 14:36:45.812993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.314 [2024-07-15 14:36:45.813115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.813135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.314 [2024-07-15 14:36:45.813146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.813162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.813176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.813185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.813194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.314 [2024-07-15 14:36:45.813208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.314 [2024-07-15 14:36:45.820851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.314 [2024-07-15 14:36:45.820936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.820957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.314 [2024-07-15 14:36:45.820968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.820984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.820998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.821007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.821016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.314 [2024-07-15 14:36:45.821030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.314 [2024-07-15 14:36:45.823079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.314 [2024-07-15 14:36:45.823163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.823184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.314 [2024-07-15 14:36:45.823195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.823211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.823225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.823234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.823243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.314 [2024-07-15 14:36:45.823258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.314 [2024-07-15 14:36:45.830909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.314 [2024-07-15 14:36:45.830999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.831020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.314 [2024-07-15 14:36:45.831031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.831047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.831061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.831070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.831079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.314 [2024-07-15 14:36:45.831094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.314 [2024-07-15 14:36:45.833135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.314 [2024-07-15 14:36:45.833223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.833244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.314 [2024-07-15 14:36:45.833255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.833271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.833285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.314 [2024-07-15 14:36:45.833294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.314 [2024-07-15 14:36:45.833303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.314 [2024-07-15 14:36:45.833317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.314 [2024-07-15 14:36:45.840969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.314 [2024-07-15 14:36:45.841055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.314 [2024-07-15 14:36:45.841076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.314 [2024-07-15 14:36:45.841086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.314 [2024-07-15 14:36:45.841102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.314 [2024-07-15 14:36:45.841117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.841125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.841134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.315 [2024-07-15 14:36:45.841149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.315 [2024-07-15 14:36:45.843191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.315 [2024-07-15 14:36:45.843304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.843325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.315 [2024-07-15 14:36:45.843335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.843351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.843366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.843375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.843383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.315 [2024-07-15 14:36:45.843398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.315 [2024-07-15 14:36:45.851026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.315 [2024-07-15 14:36:45.851112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.851133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.315 [2024-07-15 14:36:45.851143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.851160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.851175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.851183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.851193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.315 [2024-07-15 14:36:45.851208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.315 [2024-07-15 14:36:45.853263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.315 [2024-07-15 14:36:45.853363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.853383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.315 [2024-07-15 14:36:45.853393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.853409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.853423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.853432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.853441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.315 [2024-07-15 14:36:45.853455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.315 [2024-07-15 14:36:45.861082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.315 [2024-07-15 14:36:45.861200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.861220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.315 [2024-07-15 14:36:45.861231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.861247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.861261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.861270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.861281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.315 [2024-07-15 14:36:45.861296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.315 [2024-07-15 14:36:45.863332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.315 [2024-07-15 14:36:45.863417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.863444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.315 [2024-07-15 14:36:45.863455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.863471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.863485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.863494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.863503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.315 [2024-07-15 14:36:45.863518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.315 [2024-07-15 14:36:45.871170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.315 [2024-07-15 14:36:45.871259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.871280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.315 [2024-07-15 14:36:45.871292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.871308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.871322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.871331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.871340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.315 [2024-07-15 14:36:45.871354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.315 [2024-07-15 14:36:45.873387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.315 [2024-07-15 14:36:45.873472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.873492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.315 [2024-07-15 14:36:45.873502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.315 [2024-07-15 14:36:45.873519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.315 [2024-07-15 14:36:45.873533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.315 [2024-07-15 14:36:45.873542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.315 [2024-07-15 14:36:45.873551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.315 [2024-07-15 14:36:45.873565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.315 [2024-07-15 14:36:45.881229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.315 [2024-07-15 14:36:45.881344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.315 [2024-07-15 14:36:45.881367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.315 [2024-07-15 14:36:45.881378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.316 [2024-07-15 14:36:45.881393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.316 [2024-07-15 14:36:45.881408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.316 [2024-07-15 14:36:45.881417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.316 [2024-07-15 14:36:45.881426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.316 [2024-07-15 14:36:45.881441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.316 [2024-07-15 14:36:45.883441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.316 [2024-07-15 14:36:45.883537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.316 [2024-07-15 14:36:45.883558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.316 [2024-07-15 14:36:45.883570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.316 [2024-07-15 14:36:45.883586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.316 [2024-07-15 14:36:45.883611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.316 [2024-07-15 14:36:45.883622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.316 [2024-07-15 14:36:45.883631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.316 [2024-07-15 14:36:45.883646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.316 [2024-07-15 14:36:45.891305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.316 [2024-07-15 14:36:45.891409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.316 [2024-07-15 14:36:45.891429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.316 [2024-07-15 14:36:45.891440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.316 [2024-07-15 14:36:45.891457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.316 [2024-07-15 14:36:45.891476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.316 [2024-07-15 14:36:45.891485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.316 [2024-07-15 14:36:45.891494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.316 [2024-07-15 14:36:45.891508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.316 [2024-07-15 14:36:45.893498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.316 [2024-07-15 14:36:45.893582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.316 [2024-07-15 14:36:45.893603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.316 [2024-07-15 14:36:45.893613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.316 [2024-07-15 14:36:45.893629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.316 [2024-07-15 14:36:45.893643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.316 [2024-07-15 14:36:45.893652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.316 [2024-07-15 14:36:45.893661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.316 [2024-07-15 14:36:45.893675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.316 [2024-07-15 14:36:45.901377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.316 [2024-07-15 14:36:45.901464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.316 [2024-07-15 14:36:45.901485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.316 [2024-07-15 14:36:45.901495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.316 [2024-07-15 14:36:45.901513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.316 [2024-07-15 14:36:45.901527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.316 [2024-07-15 14:36:45.901536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.316 [2024-07-15 14:36:45.901545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.316 [2024-07-15 14:36:45.901559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.316 [2024-07-15 14:36:45.903552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.316 [2024-07-15 14:36:45.903636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.316 [2024-07-15 14:36:45.903658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.316 [2024-07-15 14:36:45.903668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.316 [2024-07-15 14:36:45.903706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.316 [2024-07-15 14:36:45.903725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.903741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.903750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.578 [2024-07-15 14:36:45.903765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.578 [2024-07-15 14:36:45.911433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.578 [2024-07-15 14:36:45.911533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.911553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.578 [2024-07-15 14:36:45.911564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.911580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.911594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.911603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.911612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.578 [2024-07-15 14:36:45.911627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.578 [2024-07-15 14:36:45.913607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.578 [2024-07-15 14:36:45.913692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.913727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.578 [2024-07-15 14:36:45.913739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.913756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.913770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.913778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.913787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.578 [2024-07-15 14:36:45.913802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.578 [2024-07-15 14:36:45.921504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.578 [2024-07-15 14:36:45.921590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.921611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.578 [2024-07-15 14:36:45.921622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.921638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.921653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.921661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.921670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.578 [2024-07-15 14:36:45.921685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.578 [2024-07-15 14:36:45.923666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.578 [2024-07-15 14:36:45.923774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.923803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.578 [2024-07-15 14:36:45.923819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.923836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.923852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.923868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.923882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.578 [2024-07-15 14:36:45.923904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.578 [2024-07-15 14:36:45.931562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.578 [2024-07-15 14:36:45.931647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.931668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.578 [2024-07-15 14:36:45.931678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.931706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.931743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.931754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.931763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.578 [2024-07-15 14:36:45.931779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.578 [2024-07-15 14:36:45.933743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.578 [2024-07-15 14:36:45.933853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.933875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.578 [2024-07-15 14:36:45.933885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.933902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.933916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.933925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.933934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.578 [2024-07-15 14:36:45.933948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.578 [2024-07-15 14:36:45.941618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.578 [2024-07-15 14:36:45.941724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.941747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.578 [2024-07-15 14:36:45.941758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.941775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.941789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.941798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.941807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.578 [2024-07-15 14:36:45.941822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.578 [2024-07-15 14:36:45.943821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.578 [2024-07-15 14:36:45.943909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.943929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.578 [2024-07-15 14:36:45.943940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.943957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.943971] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.943980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.943989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.578 [2024-07-15 14:36:45.944003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.578 [2024-07-15 14:36:45.951677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.578 [2024-07-15 14:36:45.951787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.951809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.578 [2024-07-15 14:36:45.951820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.578 [2024-07-15 14:36:45.951846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.578 [2024-07-15 14:36:45.951862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.578 [2024-07-15 14:36:45.951871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.578 [2024-07-15 14:36:45.951880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.578 [2024-07-15 14:36:45.951895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.578 [2024-07-15 14:36:45.953877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.578 [2024-07-15 14:36:45.953979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.578 [2024-07-15 14:36:45.954000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.579 [2024-07-15 14:36:45.954010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.954028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.954041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.954050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.954059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.579 [2024-07-15 14:36:45.954073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.579 [2024-07-15 14:36:45.961756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.579 [2024-07-15 14:36:45.961843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.961863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.579 [2024-07-15 14:36:45.961874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.961890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.961904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.961913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.961922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.579 [2024-07-15 14:36:45.961937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.579 [2024-07-15 14:36:45.963947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.579 [2024-07-15 14:36:45.964032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.964052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.579 [2024-07-15 14:36:45.964063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.964079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.964093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.964102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.964111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.579 [2024-07-15 14:36:45.964126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.579 [2024-07-15 14:36:45.971813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.579 [2024-07-15 14:36:45.971898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.971918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.579 [2024-07-15 14:36:45.971928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.971945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.971959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.971968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.971977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.579 [2024-07-15 14:36:45.971991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.579 [2024-07-15 14:36:45.974000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.579 [2024-07-15 14:36:45.974082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.974102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.579 [2024-07-15 14:36:45.974113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.974129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.974143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.974151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.974160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.579 [2024-07-15 14:36:45.974175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.579 [2024-07-15 14:36:45.981869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.579 [2024-07-15 14:36:45.981970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.981991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.579 [2024-07-15 14:36:45.982001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.982018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.982032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.982041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.982050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.579 [2024-07-15 14:36:45.982064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.579 [2024-07-15 14:36:45.984052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.579 [2024-07-15 14:36:45.984166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.984193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.579 [2024-07-15 14:36:45.984205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.984222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.984236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.984244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.984254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.579 [2024-07-15 14:36:45.984269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.579 [2024-07-15 14:36:45.991939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.579 [2024-07-15 14:36:45.992037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.992059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.579 [2024-07-15 14:36:45.992071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.992089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.992103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.992116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.992130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.579 [2024-07-15 14:36:45.992152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.579 [2024-07-15 14:36:45.994128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.579 [2024-07-15 14:36:45.994227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:45.994248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.579 [2024-07-15 14:36:45.994258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:45.994275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:45.994289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:45.994298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:45.994306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.579 [2024-07-15 14:36:45.994332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.579 [2024-07-15 14:36:46.001994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.579 [2024-07-15 14:36:46.002096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:46.002116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.579 [2024-07-15 14:36:46.002127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:46.002144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:46.002158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:46.002167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:46.002175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.579 [2024-07-15 14:36:46.002190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.579 [2024-07-15 14:36:46.004212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.579 [2024-07-15 14:36:46.004312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.579 [2024-07-15 14:36:46.004332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.579 [2024-07-15 14:36:46.004343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.579 [2024-07-15 14:36:46.004359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.579 [2024-07-15 14:36:46.004373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.579 [2024-07-15 14:36:46.004382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.579 [2024-07-15 14:36:46.004391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.579 [2024-07-15 14:36:46.004405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.579 [2024-07-15 14:36:46.012065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.580 [2024-07-15 14:36:46.012166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.012187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.580 [2024-07-15 14:36:46.012198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.012214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.012228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.012238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.012246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.580 [2024-07-15 14:36:46.012261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.580 [2024-07-15 14:36:46.014281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.580 [2024-07-15 14:36:46.014406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.014427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.580 [2024-07-15 14:36:46.014437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.014453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.014467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.014476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.014486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.580 [2024-07-15 14:36:46.014501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.580 [2024-07-15 14:36:46.022136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.580 [2024-07-15 14:36:46.022222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.022243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.580 [2024-07-15 14:36:46.022254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.022270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.022285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.022294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.022303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.580 [2024-07-15 14:36:46.022327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.580 [2024-07-15 14:36:46.024377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.580 [2024-07-15 14:36:46.024461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.024481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.580 [2024-07-15 14:36:46.024492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.024509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.024523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.024531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.024540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.580 [2024-07-15 14:36:46.024555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.580 [2024-07-15 14:36:46.032192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.580 [2024-07-15 14:36:46.032294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.580 [2024-07-15 14:36:46.032325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.032342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.032356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.032364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.032374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.580 [2024-07-15 14:36:46.032388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.580 [2024-07-15 14:36:46.034431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.580 [2024-07-15 14:36:46.034517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.034537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.580 [2024-07-15 14:36:46.034548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.034564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.034578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.034587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.034596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.580 [2024-07-15 14:36:46.034610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.580 [2024-07-15 14:36:46.042263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.580 [2024-07-15 14:36:46.042358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.042379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.580 [2024-07-15 14:36:46.042390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.042407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.042421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.042430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.042439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.580 [2024-07-15 14:36:46.042453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.580 [2024-07-15 14:36:46.044487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.580 [2024-07-15 14:36:46.044572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.044592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.580 [2024-07-15 14:36:46.044603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.044620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.044634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.044643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.044652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.580 [2024-07-15 14:36:46.044666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.580 [2024-07-15 14:36:46.052325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.580 [2024-07-15 14:36:46.052422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.052443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.580 [2024-07-15 14:36:46.052455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.052471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.052486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.052495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.052504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.580 [2024-07-15 14:36:46.052519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.580 [2024-07-15 14:36:46.054542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.580 [2024-07-15 14:36:46.054629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.054650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.580 [2024-07-15 14:36:46.054661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.054678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.054693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.054727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.054737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.580 [2024-07-15 14:36:46.054752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.580 [2024-07-15 14:36:46.062385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.580 [2024-07-15 14:36:46.062473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.580 [2024-07-15 14:36:46.062494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.580 [2024-07-15 14:36:46.062504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.580 [2024-07-15 14:36:46.062521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.580 [2024-07-15 14:36:46.062535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.580 [2024-07-15 14:36:46.062544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.580 [2024-07-15 14:36:46.062553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.580 [2024-07-15 14:36:46.062567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.581 [2024-07-15 14:36:46.064598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.581 [2024-07-15 14:36:46.064682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.064714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.581 [2024-07-15 14:36:46.064726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.064743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.064757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.064765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.064774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.581 [2024-07-15 14:36:46.064789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.581 [2024-07-15 14:36:46.072442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.581 [2024-07-15 14:36:46.072536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.072556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.581 [2024-07-15 14:36:46.072567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.072584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.072598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.072607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.072616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.581 [2024-07-15 14:36:46.072631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.581 [2024-07-15 14:36:46.074653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.581 [2024-07-15 14:36:46.074756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.074778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.581 [2024-07-15 14:36:46.074788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.074804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.074818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.074828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.074837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.581 [2024-07-15 14:36:46.074851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.581 [2024-07-15 14:36:46.082506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.581 [2024-07-15 14:36:46.082592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.082612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.581 [2024-07-15 14:36:46.082622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.082639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.082653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.082661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.082670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.581 [2024-07-15 14:36:46.082685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.581 [2024-07-15 14:36:46.084724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.581 [2024-07-15 14:36:46.084831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.084851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.581 [2024-07-15 14:36:46.084862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.084878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.084892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.084901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.084910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.581 [2024-07-15 14:36:46.084924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.581 [2024-07-15 14:36:46.092562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.581 [2024-07-15 14:36:46.092658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.092679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.581 [2024-07-15 14:36:46.092690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.092720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.092736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.092745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.092754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.581 [2024-07-15 14:36:46.092769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.581 [2024-07-15 14:36:46.094801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.581 [2024-07-15 14:36:46.094886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.094907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.581 [2024-07-15 14:36:46.094918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.094935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.094949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.094957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.094966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.581 [2024-07-15 14:36:46.094981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.581 [2024-07-15 14:36:46.102622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.581 [2024-07-15 14:36:46.102720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.102748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.581 [2024-07-15 14:36:46.102758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.102774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.102789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.102797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.102806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.581 [2024-07-15 14:36:46.102821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.581 [2024-07-15 14:36:46.104855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.581 [2024-07-15 14:36:46.104945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.104965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.581 [2024-07-15 14:36:46.104975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.104992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.105006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.105014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.105025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.581 [2024-07-15 14:36:46.105039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.581 [2024-07-15 14:36:46.112680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.581 [2024-07-15 14:36:46.112788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.112810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.581 [2024-07-15 14:36:46.112821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.112837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.112852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.112861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.112870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.581 [2024-07-15 14:36:46.112885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.581 [2024-07-15 14:36:46.114917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.581 [2024-07-15 14:36:46.115002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.581 [2024-07-15 14:36:46.115023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.581 [2024-07-15 14:36:46.115034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.581 [2024-07-15 14:36:46.115050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.581 [2024-07-15 14:36:46.115064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.581 [2024-07-15 14:36:46.115073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.581 [2024-07-15 14:36:46.115082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.582 [2024-07-15 14:36:46.115097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.582 [2024-07-15 14:36:46.122749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.582 [2024-07-15 14:36:46.122833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.122853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.582 [2024-07-15 14:36:46.122864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.122881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.122895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.122903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.122912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.582 [2024-07-15 14:36:46.122928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.582 [2024-07-15 14:36:46.124974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.582 [2024-07-15 14:36:46.125058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.125078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.582 [2024-07-15 14:36:46.125089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.125106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.125120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.125129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.125138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.582 [2024-07-15 14:36:46.125153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.582 [2024-07-15 14:36:46.132804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.582 [2024-07-15 14:36:46.132894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.132916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.582 [2024-07-15 14:36:46.132926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.132943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.132957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.132966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.132987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.582 [2024-07-15 14:36:46.133002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.582 [2024-07-15 14:36:46.135028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.582 [2024-07-15 14:36:46.135115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.135136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.582 [2024-07-15 14:36:46.135146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.135163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.135176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.135185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.135195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.582 [2024-07-15 14:36:46.135209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.582 [2024-07-15 14:36:46.142862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.582 [2024-07-15 14:36:46.142952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.142972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.582 [2024-07-15 14:36:46.142983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.143000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.143015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.143023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.143039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.582 [2024-07-15 14:36:46.143053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.582 [2024-07-15 14:36:46.145083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.582 [2024-07-15 14:36:46.145167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.145187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.582 [2024-07-15 14:36:46.145198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.145214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.145229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.145238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.145247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.582 [2024-07-15 14:36:46.145261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.582 [2024-07-15 14:36:46.152920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.582 [2024-07-15 14:36:46.153005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.153025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.582 [2024-07-15 14:36:46.153036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.153052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.153066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.153075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.153084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.582 [2024-07-15 14:36:46.153099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.582 [2024-07-15 14:36:46.155137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.582 [2024-07-15 14:36:46.155221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.155241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.582 [2024-07-15 14:36:46.155252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.155268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.155282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.582 [2024-07-15 14:36:46.155291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.582 [2024-07-15 14:36:46.155300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.582 [2024-07-15 14:36:46.155314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.582 [2024-07-15 14:36:46.162974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.582 [2024-07-15 14:36:46.163059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.582 [2024-07-15 14:36:46.163079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.582 [2024-07-15 14:36:46.163090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.582 [2024-07-15 14:36:46.163106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.582 [2024-07-15 14:36:46.163121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.583 [2024-07-15 14:36:46.163130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.583 [2024-07-15 14:36:46.163138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.583 [2024-07-15 14:36:46.163153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.583 [2024-07-15 14:36:46.165191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.583 [2024-07-15 14:36:46.165274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.583 [2024-07-15 14:36:46.165294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.583 [2024-07-15 14:36:46.165305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.583 [2024-07-15 14:36:46.165321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.583 [2024-07-15 14:36:46.165335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.583 [2024-07-15 14:36:46.165344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.583 [2024-07-15 14:36:46.165353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.583 [2024-07-15 14:36:46.165368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.844 [2024-07-15 14:36:46.173029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.844 [2024-07-15 14:36:46.173115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.844 [2024-07-15 14:36:46.173135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.844 [2024-07-15 14:36:46.173146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.844 [2024-07-15 14:36:46.173162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.844 [2024-07-15 14:36:46.173176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.844 [2024-07-15 14:36:46.173185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.844 [2024-07-15 14:36:46.173194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.844 [2024-07-15 14:36:46.173208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.844 [2024-07-15 14:36:46.175245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.844 [2024-07-15 14:36:46.175328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.844 [2024-07-15 14:36:46.175349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.844 [2024-07-15 14:36:46.175359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.844 [2024-07-15 14:36:46.175376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.844 [2024-07-15 14:36:46.175391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.844 [2024-07-15 14:36:46.175400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.844 [2024-07-15 14:36:46.175409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.844 [2024-07-15 14:36:46.175423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.844 [2024-07-15 14:36:46.183087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.844 [2024-07-15 14:36:46.183174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.844 [2024-07-15 14:36:46.183194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.844 [2024-07-15 14:36:46.183204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.844 [2024-07-15 14:36:46.183221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.844 [2024-07-15 14:36:46.183235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.844 [2024-07-15 14:36:46.183244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.844 [2024-07-15 14:36:46.183253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.844 [2024-07-15 14:36:46.183268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.844 [2024-07-15 14:36:46.185299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.844 [2024-07-15 14:36:46.185383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.844 [2024-07-15 14:36:46.185404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.844 [2024-07-15 14:36:46.185414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.844 [2024-07-15 14:36:46.185431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.844 [2024-07-15 14:36:46.185445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.844 [2024-07-15 14:36:46.185454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.844 [2024-07-15 14:36:46.185463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.844 [2024-07-15 14:36:46.185478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.844 [2024-07-15 14:36:46.193143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.844 [2024-07-15 14:36:46.193257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.844 [2024-07-15 14:36:46.193279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.845 [2024-07-15 14:36:46.193290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.193306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.193321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.193330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.193339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.845 [2024-07-15 14:36:46.193354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.845 [2024-07-15 14:36:46.195353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.845 [2024-07-15 14:36:46.195438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.195459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.845 [2024-07-15 14:36:46.195470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.195486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.195500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.195509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.195519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.845 [2024-07-15 14:36:46.195533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.845 [2024-07-15 14:36:46.203218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.845 [2024-07-15 14:36:46.203303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.203324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.845 [2024-07-15 14:36:46.203335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.203353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.203367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.203376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.203385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.845 [2024-07-15 14:36:46.203400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.845 [2024-07-15 14:36:46.205409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.845 [2024-07-15 14:36:46.205499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.205520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.845 [2024-07-15 14:36:46.205530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.205547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.205562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.205571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.205580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.845 [2024-07-15 14:36:46.205595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.845 [2024-07-15 14:36:46.213274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.845 [2024-07-15 14:36:46.213359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.213380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.845 [2024-07-15 14:36:46.213390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.213406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.213421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.213429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.213438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.845 [2024-07-15 14:36:46.213453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.845 [2024-07-15 14:36:46.215470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.845 [2024-07-15 14:36:46.215555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.215575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.845 [2024-07-15 14:36:46.215586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.215602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.215616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.215625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.215634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.845 [2024-07-15 14:36:46.215648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.845 [2024-07-15 14:36:46.223329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.845 [2024-07-15 14:36:46.223413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.223434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.845 [2024-07-15 14:36:46.223444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.223460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.223474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.223483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.223492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.845 [2024-07-15 14:36:46.223507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.845 [2024-07-15 14:36:46.225526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.845 [2024-07-15 14:36:46.225611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.225631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.845 [2024-07-15 14:36:46.225641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.225657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.225671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.225680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.225690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.845 [2024-07-15 14:36:46.225719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.845 [2024-07-15 14:36:46.233384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.845 [2024-07-15 14:36:46.233471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.233492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.845 [2024-07-15 14:36:46.233503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.233519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.233533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.233542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.233551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.845 [2024-07-15 14:36:46.233566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.845 [2024-07-15 14:36:46.235581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.845 [2024-07-15 14:36:46.235667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.235688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.845 [2024-07-15 14:36:46.235709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.235728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.235743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.235752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.235761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.845 [2024-07-15 14:36:46.235776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.845 [2024-07-15 14:36:46.243439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.845 [2024-07-15 14:36:46.243526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.845 [2024-07-15 14:36:46.243547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.845 [2024-07-15 14:36:46.243557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.845 [2024-07-15 14:36:46.243574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.845 [2024-07-15 14:36:46.243588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.845 [2024-07-15 14:36:46.243596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.845 [2024-07-15 14:36:46.243606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.845 [2024-07-15 14:36:46.243620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.845 [2024-07-15 14:36:46.245635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.846 [2024-07-15 14:36:46.245731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.245752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.846 [2024-07-15 14:36:46.245764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.245780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.245794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.245803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.245812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.846 [2024-07-15 14:36:46.245827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.846 [2024-07-15 14:36:46.253497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.846 [2024-07-15 14:36:46.253588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.253609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.846 [2024-07-15 14:36:46.253620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.253636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.253651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.253659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.253668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.846 [2024-07-15 14:36:46.253683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.846 [2024-07-15 14:36:46.255690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.846 [2024-07-15 14:36:46.255786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.255807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.846 [2024-07-15 14:36:46.255818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.255835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.255858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.255869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.255878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.846 [2024-07-15 14:36:46.255893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.846 [2024-07-15 14:36:46.263555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.846 [2024-07-15 14:36:46.263641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.263662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.846 [2024-07-15 14:36:46.263672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.263689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.263716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.263727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.263736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.846 [2024-07-15 14:36:46.263750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.846 [2024-07-15 14:36:46.265755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.846 [2024-07-15 14:36:46.265840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.265860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.846 [2024-07-15 14:36:46.265870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.265887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.265902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.265911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.265920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.846 [2024-07-15 14:36:46.265934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.846 [2024-07-15 14:36:46.273610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.846 [2024-07-15 14:36:46.273707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.273730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.846 [2024-07-15 14:36:46.273741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.273758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.273772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.273781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.273790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.846 [2024-07-15 14:36:46.273805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.846 [2024-07-15 14:36:46.275810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.846 [2024-07-15 14:36:46.275894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.275915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.846 [2024-07-15 14:36:46.275925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.275951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.275967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.275976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.275985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.846 [2024-07-15 14:36:46.275999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.846 [2024-07-15 14:36:46.283665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.846 [2024-07-15 14:36:46.283759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.283780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.846 [2024-07-15 14:36:46.283792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.283809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.283823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.283832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.283841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.846 [2024-07-15 14:36:46.283855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.846 [2024-07-15 14:36:46.285863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.846 [2024-07-15 14:36:46.285963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.285983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.846 [2024-07-15 14:36:46.285993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.286009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.286024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.286033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.286042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.846 [2024-07-15 14:36:46.286056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.846 [2024-07-15 14:36:46.293734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.846 [2024-07-15 14:36:46.293843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.293864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.846 [2024-07-15 14:36:46.293874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.293891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.293905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.293914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.293923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.846 [2024-07-15 14:36:46.293937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.846 [2024-07-15 14:36:46.295933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.846 [2024-07-15 14:36:46.296043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.846 [2024-07-15 14:36:46.296065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.846 [2024-07-15 14:36:46.296075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.846 [2024-07-15 14:36:46.296092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.846 [2024-07-15 14:36:46.296106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.846 [2024-07-15 14:36:46.296115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.846 [2024-07-15 14:36:46.296124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.846 [2024-07-15 14:36:46.296138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.846 [2024-07-15 14:36:46.303814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.846 [2024-07-15 14:36:46.303947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.303969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.847 [2024-07-15 14:36:46.303980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.304007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.304023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.304032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.304041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.847 [2024-07-15 14:36:46.304056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.847 [2024-07-15 14:36:46.305991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.847 [2024-07-15 14:36:46.306130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.306151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.847 [2024-07-15 14:36:46.306162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.306178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.306193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.306202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.306211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.847 [2024-07-15 14:36:46.306225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.847 [2024-07-15 14:36:46.313907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.847 [2024-07-15 14:36:46.314022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.314043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.847 [2024-07-15 14:36:46.314054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.314070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.314084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.314093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.314102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.847 [2024-07-15 14:36:46.314116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.847 [2024-07-15 14:36:46.316076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.847 [2024-07-15 14:36:46.316189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.316209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.847 [2024-07-15 14:36:46.316220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.316236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.316250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.316259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.316268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.847 [2024-07-15 14:36:46.316282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.847 [2024-07-15 14:36:46.323990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.847 [2024-07-15 14:36:46.324103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.324123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.847 [2024-07-15 14:36:46.324134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.324150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.324164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.324173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.324182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.847 [2024-07-15 14:36:46.324196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.847 [2024-07-15 14:36:46.326161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.847 [2024-07-15 14:36:46.326276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.326296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.847 [2024-07-15 14:36:46.326307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.326335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.326351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.326361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.326369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.847 [2024-07-15 14:36:46.326384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.847 [2024-07-15 14:36:46.334060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.847 [2024-07-15 14:36:46.334176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.334197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.847 [2024-07-15 14:36:46.334208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.334224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.334238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.334247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.334256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.847 [2024-07-15 14:36:46.334271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.847 [2024-07-15 14:36:46.336242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.847 [2024-07-15 14:36:46.336357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.336377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.847 [2024-07-15 14:36:46.336388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.336404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.336418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.336427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.336436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.847 [2024-07-15 14:36:46.336451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.847 [2024-07-15 14:36:46.344146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.847 [2024-07-15 14:36:46.344232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.344252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.847 [2024-07-15 14:36:46.344263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.344279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.344293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.344302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.344310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.847 [2024-07-15 14:36:46.344325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.847 [2024-07-15 14:36:46.346311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.847 [2024-07-15 14:36:46.346421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.346441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.847 [2024-07-15 14:36:46.346452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.346468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.346482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.346491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.346500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.847 [2024-07-15 14:36:46.346514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.847 [2024-07-15 14:36:46.354203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.847 [2024-07-15 14:36:46.354288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.847 [2024-07-15 14:36:46.354308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.847 [2024-07-15 14:36:46.354328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.847 [2024-07-15 14:36:46.354345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.847 [2024-07-15 14:36:46.354359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.847 [2024-07-15 14:36:46.354368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.847 [2024-07-15 14:36:46.354377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.847 [2024-07-15 14:36:46.354392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.847 [2024-07-15 14:36:46.356389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.847 [2024-07-15 14:36:46.356503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.356523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.848 [2024-07-15 14:36:46.356534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.356550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.356564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.356573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.356582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.848 [2024-07-15 14:36:46.356596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.848 [2024-07-15 14:36:46.364257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.848 [2024-07-15 14:36:46.364344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.364364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.848 [2024-07-15 14:36:46.364375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.364392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.364406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.364415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.364424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.848 [2024-07-15 14:36:46.364439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.848 [2024-07-15 14:36:46.366458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.848 [2024-07-15 14:36:46.366542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.366563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.848 [2024-07-15 14:36:46.366574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.366590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.366604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.366613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.366621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.848 [2024-07-15 14:36:46.366636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.848 [2024-07-15 14:36:46.374313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.848 [2024-07-15 14:36:46.374405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.374426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.848 [2024-07-15 14:36:46.374436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.374452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.374467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.374475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.374484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.848 [2024-07-15 14:36:46.374499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.848 [2024-07-15 14:36:46.376512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.848 [2024-07-15 14:36:46.376595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.376615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.848 [2024-07-15 14:36:46.376626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.376642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.376656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.376665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.376674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.848 [2024-07-15 14:36:46.376688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.848 [2024-07-15 14:36:46.384375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.848 [2024-07-15 14:36:46.384461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.384482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.848 [2024-07-15 14:36:46.384492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.384509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.384523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.384531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.384540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.848 [2024-07-15 14:36:46.384555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.848 [2024-07-15 14:36:46.386566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.848 [2024-07-15 14:36:46.386650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.386670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.848 [2024-07-15 14:36:46.386681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.386708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.386725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.386734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.386743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.848 [2024-07-15 14:36:46.386758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.848 [2024-07-15 14:36:46.394429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.848 [2024-07-15 14:36:46.394513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.394534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.848 [2024-07-15 14:36:46.394545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.394561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.394575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.394583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.848 [2024-07-15 14:36:46.394592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.848 [2024-07-15 14:36:46.394607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.848 [2024-07-15 14:36:46.396619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.848 [2024-07-15 14:36:46.396713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.848 [2024-07-15 14:36:46.396735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.848 [2024-07-15 14:36:46.396745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.848 [2024-07-15 14:36:46.396761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.848 [2024-07-15 14:36:46.396776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.848 [2024-07-15 14:36:46.396784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.396794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.849 [2024-07-15 14:36:46.396814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.849 [2024-07-15 14:36:46.404485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.849 [2024-07-15 14:36:46.404581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.404602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.849 [2024-07-15 14:36:46.404613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.404630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.404644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.404653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.404662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.849 [2024-07-15 14:36:46.404683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.849 [2024-07-15 14:36:46.406675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.849 [2024-07-15 14:36:46.406780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.406801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.849 [2024-07-15 14:36:46.406812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.406828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.406842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.406851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.406860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.849 [2024-07-15 14:36:46.406878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.849 [2024-07-15 14:36:46.414546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.849 [2024-07-15 14:36:46.414639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.414660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.849 [2024-07-15 14:36:46.414671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.414687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.414716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.414732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.414746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.849 [2024-07-15 14:36:46.414765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.849 [2024-07-15 14:36:46.416737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.849 [2024-07-15 14:36:46.416831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.416851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.849 [2024-07-15 14:36:46.416862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.416878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.416892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.416901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.416910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.849 [2024-07-15 14:36:46.416925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.849 [2024-07-15 14:36:46.424604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.849 [2024-07-15 14:36:46.424689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.424721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.849 [2024-07-15 14:36:46.424732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.424749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.424763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.424771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.424781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.849 [2024-07-15 14:36:46.424795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.849 [2024-07-15 14:36:46.426802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.849 [2024-07-15 14:36:46.426897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.426917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:06.849 [2024-07-15 14:36:46.426928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.426943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.426958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.426967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.426976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:06.849 [2024-07-15 14:36:46.426990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.849 [2024-07-15 14:36:46.434660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:06.849 [2024-07-15 14:36:46.434758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.849 [2024-07-15 14:36:46.434780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:06.849 [2024-07-15 14:36:46.434793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:06.849 [2024-07-15 14:36:46.434809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:06.849 [2024-07-15 14:36:46.434823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:06.849 [2024-07-15 14:36:46.434832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:06.849 [2024-07-15 14:36:46.434841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:06.849 [2024-07-15 14:36:46.434856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.849 [2024-07-15 14:36:46.436856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:06.849 [2024-07-15 14:36:46.436943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.436963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.111 [2024-07-15 14:36:46.436974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.436990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.437005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.437014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.437022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.111 [2024-07-15 14:36:46.437037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.111 [2024-07-15 14:36:46.444732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.111 [2024-07-15 14:36:46.444818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.444838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.111 [2024-07-15 14:36:46.444849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.444866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.444880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.444889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.444898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.111 [2024-07-15 14:36:46.444913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.111 [2024-07-15 14:36:46.446912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.111 [2024-07-15 14:36:46.446997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.447018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.111 [2024-07-15 14:36:46.447028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.447045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.447059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.447068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.447077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.111 [2024-07-15 14:36:46.447091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.111 [2024-07-15 14:36:46.454786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.111 [2024-07-15 14:36:46.454889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.454914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.111 [2024-07-15 14:36:46.454924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.454940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.454954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.454964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.454972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.111 [2024-07-15 14:36:46.454987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.111 [2024-07-15 14:36:46.456966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.111 [2024-07-15 14:36:46.457049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.457070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.111 [2024-07-15 14:36:46.457080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.457096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.457110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.457119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.457128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.111 [2024-07-15 14:36:46.457143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.111 [2024-07-15 14:36:46.464858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.111 [2024-07-15 14:36:46.464959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.464979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.111 [2024-07-15 14:36:46.464990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.465006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.465020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.465029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.465037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.111 [2024-07-15 14:36:46.465052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.111 [2024-07-15 14:36:46.467019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.111 [2024-07-15 14:36:46.467119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.467140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.111 [2024-07-15 14:36:46.467150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.467166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.467180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.467189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.467198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.111 [2024-07-15 14:36:46.467213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.111 [2024-07-15 14:36:46.474928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.111 [2024-07-15 14:36:46.475042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.475063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.111 [2024-07-15 14:36:46.475074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.475090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.475104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.111 [2024-07-15 14:36:46.475112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.111 [2024-07-15 14:36:46.475121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.111 [2024-07-15 14:36:46.475136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.111 [2024-07-15 14:36:46.477099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.111 [2024-07-15 14:36:46.477214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.111 [2024-07-15 14:36:46.477234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.111 [2024-07-15 14:36:46.477245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.111 [2024-07-15 14:36:46.477268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.111 [2024-07-15 14:36:46.477282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.477291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.477300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.112 [2024-07-15 14:36:46.477314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.112 [2024-07-15 14:36:46.484997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.112 [2024-07-15 14:36:46.485133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.485153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.112 [2024-07-15 14:36:46.485164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.485179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.485194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.485202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.485211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.112 [2024-07-15 14:36:46.485232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.112 [2024-07-15 14:36:46.487166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.112 [2024-07-15 14:36:46.487281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.487302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.112 [2024-07-15 14:36:46.487313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.487329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.487343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.487351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.487360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.112 [2024-07-15 14:36:46.487375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.112 [2024-07-15 14:36:46.495087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.112 [2024-07-15 14:36:46.495172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.495192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.112 [2024-07-15 14:36:46.495203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.495219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.495234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.495243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.495252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.112 [2024-07-15 14:36:46.495266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.112 [2024-07-15 14:36:46.497235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.112 [2024-07-15 14:36:46.497347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.497367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.112 [2024-07-15 14:36:46.497378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.497394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.497408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.497417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.497426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.112 [2024-07-15 14:36:46.497440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.112 [2024-07-15 14:36:46.505141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.112 [2024-07-15 14:36:46.505265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.505286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.112 [2024-07-15 14:36:46.505296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.505312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.505326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.505334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.505343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.112 [2024-07-15 14:36:46.505357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.112 [2024-07-15 14:36:46.507302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.112 [2024-07-15 14:36:46.507385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.507406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.112 [2024-07-15 14:36:46.507417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.507434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.507448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.507457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.507466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.112 [2024-07-15 14:36:46.507480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
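Each failed connect also leaves the qpair without a usable socket, which is why the next step in every block, flushing the qpair in nvme_tcp_qpair_process_completions, reports errno 9. On Linux errno 9 is EBADF (Bad file descriptor): the code is operating on a descriptor that is not open. A minimal sketch (unrelated to SPDK internals) producing the same errno by deliberately writing to a closed descriptor:

/* Minimal sketch: errno 9 is EBADF on Linux. Operating on a descriptor
 * that is not open (here, one closed on purpose) reproduces the
 * "(9): Bad file descriptor" reported when the qpair's socket never
 * came up. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }
    close(fds[1]);                 /* invalidate the write end on purpose */

    if (write(fds[1], "x", 1) < 0) {
        /* Expect errno == EBADF (9), matching the flush error in the log. */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}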
00:20:07.112 [2024-07-15 14:36:46.515214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.112 [2024-07-15 14:36:46.515308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.515329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.112 [2024-07-15 14:36:46.515341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.515357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.515372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.515381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.515390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.112 [2024-07-15 14:36:46.515405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.112 [2024-07-15 14:36:46.517354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.112 [2024-07-15 14:36:46.517443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.517463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.112 [2024-07-15 14:36:46.517474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.517490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.517504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.517513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.517522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.112 [2024-07-15 14:36:46.517536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.112 [2024-07-15 14:36:46.525271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.112 [2024-07-15 14:36:46.525386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.525406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.112 [2024-07-15 14:36:46.525417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.525433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.525450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.525459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.525468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.112 [2024-07-15 14:36:46.525482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.112 [2024-07-15 14:36:46.527412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.112 [2024-07-15 14:36:46.527514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.527534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.112 [2024-07-15 14:36:46.527545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.527561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.112 [2024-07-15 14:36:46.527576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.112 [2024-07-15 14:36:46.527585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.112 [2024-07-15 14:36:46.527594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.112 [2024-07-15 14:36:46.527608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.112 [2024-07-15 14:36:46.535342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.112 [2024-07-15 14:36:46.535431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.112 [2024-07-15 14:36:46.535452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.112 [2024-07-15 14:36:46.535463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.112 [2024-07-15 14:36:46.535479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.535494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.535503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.535512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.113 [2024-07-15 14:36:46.535526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.113 [2024-07-15 14:36:46.537482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.113 [2024-07-15 14:36:46.537567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.537588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.113 [2024-07-15 14:36:46.537599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.537615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.537629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.537638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.537647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.113 [2024-07-15 14:36:46.537662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.113 [2024-07-15 14:36:46.545399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.113 [2024-07-15 14:36:46.545485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.545506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.113 [2024-07-15 14:36:46.545517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.545533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.545547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.545556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.545565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.113 [2024-07-15 14:36:46.545580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.113 [2024-07-15 14:36:46.547536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.113 [2024-07-15 14:36:46.547621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.547642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.113 [2024-07-15 14:36:46.547653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.547669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.547683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.547692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.547714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.113 [2024-07-15 14:36:46.547731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.113 [2024-07-15 14:36:46.555456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.113 [2024-07-15 14:36:46.555544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.555564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.113 [2024-07-15 14:36:46.555575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.555591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.555606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.555614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.555623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.113 [2024-07-15 14:36:46.555638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.113 [2024-07-15 14:36:46.557591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.113 [2024-07-15 14:36:46.557690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.557724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.113 [2024-07-15 14:36:46.557736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.557753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.557767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.557776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.557785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.113 [2024-07-15 14:36:46.557800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.113 [2024-07-15 14:36:46.565513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.113 [2024-07-15 14:36:46.565614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.565634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.113 [2024-07-15 14:36:46.565645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.565661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.565678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.565687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.565709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.113 [2024-07-15 14:36:46.565727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.113 [2024-07-15 14:36:46.567660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.113 [2024-07-15 14:36:46.567767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.567789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.113 [2024-07-15 14:36:46.567800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.567816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.567830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.567840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.567849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.113 [2024-07-15 14:36:46.567864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.113 [2024-07-15 14:36:46.575583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.113 [2024-07-15 14:36:46.575684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.575715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.113 [2024-07-15 14:36:46.575727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.575744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.575758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.575767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.575776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.113 [2024-07-15 14:36:46.575791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.113 [2024-07-15 14:36:46.577736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.113 [2024-07-15 14:36:46.577865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.577886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.113 [2024-07-15 14:36:46.577896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.577912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.577927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.577936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.577945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.113 [2024-07-15 14:36:46.577959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.113 [2024-07-15 14:36:46.585653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.113 [2024-07-15 14:36:46.585778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.585799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.113 [2024-07-15 14:36:46.585811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.113 [2024-07-15 14:36:46.585827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.113 [2024-07-15 14:36:46.585841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.113 [2024-07-15 14:36:46.585850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.113 [2024-07-15 14:36:46.585859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.113 [2024-07-15 14:36:46.585873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.113 [2024-07-15 14:36:46.587830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.113 [2024-07-15 14:36:46.587931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.113 [2024-07-15 14:36:46.587952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.113 [2024-07-15 14:36:46.587962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.587979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.587993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.588002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.588011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.114 [2024-07-15 14:36:46.588036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.114 [2024-07-15 14:36:46.595754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.114 [2024-07-15 14:36:46.595876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.595897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.114 [2024-07-15 14:36:46.595908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.595924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.595939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.595947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.595956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.114 [2024-07-15 14:36:46.595971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.114 [2024-07-15 14:36:46.597900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.114 [2024-07-15 14:36:46.598015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.598035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.114 [2024-07-15 14:36:46.598046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.598062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.598076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.598086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.598094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.114 [2024-07-15 14:36:46.598109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.114 [2024-07-15 14:36:46.605830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.114 [2024-07-15 14:36:46.605945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.605966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.114 [2024-07-15 14:36:46.605977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.605993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.606008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.606016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.606025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.114 [2024-07-15 14:36:46.606040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.114 [2024-07-15 14:36:46.607968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.114 [2024-07-15 14:36:46.608094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.608116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.114 [2024-07-15 14:36:46.608127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.608153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.608169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.608178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.608191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.114 [2024-07-15 14:36:46.608214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.114 [2024-07-15 14:36:46.615899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.114 [2024-07-15 14:36:46.616014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.616035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.114 [2024-07-15 14:36:46.616045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.616062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.616076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.616085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.616094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.114 [2024-07-15 14:36:46.616119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.114 [2024-07-15 14:36:46.618040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.114 [2024-07-15 14:36:46.618139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.618158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.114 [2024-07-15 14:36:46.618169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.618185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.618199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.618208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.618217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.114 [2024-07-15 14:36:46.618231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.114 [2024-07-15 14:36:46.625968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.114 [2024-07-15 14:36:46.626083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.626104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.114 [2024-07-15 14:36:46.626115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.626131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.626145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.626154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.626162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.114 [2024-07-15 14:36:46.626177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.114 [2024-07-15 14:36:46.628126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.114 [2024-07-15 14:36:46.628227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.628247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.114 [2024-07-15 14:36:46.628258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.628274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.628289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.628298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.628307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.114 [2024-07-15 14:36:46.628321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.114 [2024-07-15 14:36:46.636036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.114 [2024-07-15 14:36:46.636151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.636171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.114 [2024-07-15 14:36:46.636182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.636209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.636225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.636233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.636242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.114 [2024-07-15 14:36:46.636257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.114 [2024-07-15 14:36:46.638195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.114 [2024-07-15 14:36:46.638307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.638343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.114 [2024-07-15 14:36:46.638355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.114 [2024-07-15 14:36:46.638372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.114 [2024-07-15 14:36:46.638386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.114 [2024-07-15 14:36:46.638395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.114 [2024-07-15 14:36:46.638404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.114 [2024-07-15 14:36:46.638418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.114 [2024-07-15 14:36:46.646120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.114 [2024-07-15 14:36:46.646237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.114 [2024-07-15 14:36:46.646257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.114 [2024-07-15 14:36:46.646268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.646284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.646298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.646307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.646316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.115 [2024-07-15 14:36:46.646344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.115 [2024-07-15 14:36:46.648261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.115 [2024-07-15 14:36:46.648374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.648394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.115 [2024-07-15 14:36:46.648404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.648421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.648435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.648444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.648453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.115 [2024-07-15 14:36:46.648467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.115 [2024-07-15 14:36:46.656189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.115 [2024-07-15 14:36:46.656303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.656324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.115 [2024-07-15 14:36:46.656334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.656350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.656365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.656373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.656382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.115 [2024-07-15 14:36:46.656396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.115 [2024-07-15 14:36:46.658361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.115 [2024-07-15 14:36:46.658445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.658466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.115 [2024-07-15 14:36:46.658476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.658493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.658507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.658516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.658525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.115 [2024-07-15 14:36:46.658539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.115 [2024-07-15 14:36:46.666257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.115 [2024-07-15 14:36:46.666380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.666401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.115 [2024-07-15 14:36:46.666412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.666428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.666442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.666451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.666460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.115 [2024-07-15 14:36:46.666475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.115 [2024-07-15 14:36:46.668413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.115 [2024-07-15 14:36:46.668526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.668546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.115 [2024-07-15 14:36:46.668556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.668572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.668586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.668595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.668604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.115 [2024-07-15 14:36:46.668618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.115 [2024-07-15 14:36:46.676325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.115 [2024-07-15 14:36:46.676440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.676460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.115 [2024-07-15 14:36:46.676471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.676487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.676502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.676511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.676519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.115 [2024-07-15 14:36:46.676534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.115 [2024-07-15 14:36:46.678481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.115 [2024-07-15 14:36:46.678566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.678587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.115 [2024-07-15 14:36:46.678597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.678613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.678627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.678636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.678645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.115 [2024-07-15 14:36:46.678660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.115 [2024-07-15 14:36:46.686397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.115 [2024-07-15 14:36:46.686484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.686504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.115 [2024-07-15 14:36:46.686517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.115 [2024-07-15 14:36:46.686533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.115 [2024-07-15 14:36:46.686548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.115 [2024-07-15 14:36:46.686556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.115 [2024-07-15 14:36:46.686565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.115 [2024-07-15 14:36:46.686579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.115 [2024-07-15 14:36:46.688537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.115 [2024-07-15 14:36:46.688652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.115 [2024-07-15 14:36:46.688672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.115 [2024-07-15 14:36:46.688683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.116 [2024-07-15 14:36:46.688699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.116 [2024-07-15 14:36:46.688725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.116 [2024-07-15 14:36:46.688736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.116 [2024-07-15 14:36:46.688745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.116 [2024-07-15 14:36:46.688759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.116 [2024-07-15 14:36:46.696453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.116 [2024-07-15 14:36:46.696571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.116 [2024-07-15 14:36:46.696592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.116 [2024-07-15 14:36:46.696602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.116 [2024-07-15 14:36:46.696618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.116 [2024-07-15 14:36:46.696633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.116 [2024-07-15 14:36:46.696642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.116 [2024-07-15 14:36:46.696651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.116 [2024-07-15 14:36:46.696666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.116 [2024-07-15 14:36:46.698623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.116 [2024-07-15 14:36:46.698721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.116 [2024-07-15 14:36:46.698743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.116 [2024-07-15 14:36:46.698753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.116 [2024-07-15 14:36:46.698770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.116 [2024-07-15 14:36:46.698784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.116 [2024-07-15 14:36:46.698793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.116 [2024-07-15 14:36:46.698802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.116 [2024-07-15 14:36:46.698816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.376 [2024-07-15 14:36:46.706524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.376 [2024-07-15 14:36:46.706610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.376 [2024-07-15 14:36:46.706631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.376 [2024-07-15 14:36:46.706642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.376 [2024-07-15 14:36:46.706659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.376 [2024-07-15 14:36:46.706673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.376 [2024-07-15 14:36:46.706681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.376 [2024-07-15 14:36:46.706690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.376 [2024-07-15 14:36:46.706718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.376 [2024-07-15 14:36:46.708678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.376 [2024-07-15 14:36:46.708802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.376 [2024-07-15 14:36:46.708823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.376 [2024-07-15 14:36:46.708834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.376 [2024-07-15 14:36:46.708850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.376 [2024-07-15 14:36:46.708865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.376 [2024-07-15 14:36:46.708873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.376 [2024-07-15 14:36:46.708882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.376 [2024-07-15 14:36:46.708897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.376 [2024-07-15 14:36:46.716580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.376 [2024-07-15 14:36:46.716705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.376 [2024-07-15 14:36:46.716740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.376 [2024-07-15 14:36:46.716752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.376 [2024-07-15 14:36:46.716769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.376 [2024-07-15 14:36:46.716784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.376 [2024-07-15 14:36:46.716792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.376 [2024-07-15 14:36:46.716801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.376 [2024-07-15 14:36:46.716817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.376 [2024-07-15 14:36:46.718772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.376 [2024-07-15 14:36:46.718889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.376 [2024-07-15 14:36:46.718910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.376 [2024-07-15 14:36:46.718921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.376 [2024-07-15 14:36:46.718937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.376 [2024-07-15 14:36:46.718952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.376 [2024-07-15 14:36:46.718961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.376 [2024-07-15 14:36:46.718970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.376 [2024-07-15 14:36:46.718984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.376 [2024-07-15 14:36:46.726655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.376 [2024-07-15 14:36:46.726786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.726807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.377 [2024-07-15 14:36:46.726818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.726834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.726849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.726858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.726867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.377 [2024-07-15 14:36:46.726882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.377 [2024-07-15 14:36:46.728857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.377 [2024-07-15 14:36:46.728971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.728991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.377 [2024-07-15 14:36:46.729002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.729018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.729033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.729042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.729051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.377 [2024-07-15 14:36:46.729065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.377 [2024-07-15 14:36:46.736751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.377 [2024-07-15 14:36:46.736868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.736889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.377 [2024-07-15 14:36:46.736900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.736916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.736931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.736940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.736948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.377 [2024-07-15 14:36:46.736963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.377 [2024-07-15 14:36:46.738926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.377 [2024-07-15 14:36:46.739040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.739062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.377 [2024-07-15 14:36:46.739072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.739089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.739103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.739112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.739121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.377 [2024-07-15 14:36:46.739135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.377 [2024-07-15 14:36:46.746821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.377 [2024-07-15 14:36:46.746937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.746957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.377 [2024-07-15 14:36:46.746968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.746984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.746999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.747008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.747017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.377 [2024-07-15 14:36:46.747032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.377 [2024-07-15 14:36:46.749009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.377 [2024-07-15 14:36:46.749124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.749144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.377 [2024-07-15 14:36:46.749155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.749171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.749186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.749194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.749203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.377 [2024-07-15 14:36:46.749218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.377 [2024-07-15 14:36:46.756893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.377 [2024-07-15 14:36:46.756979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.757000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.377 [2024-07-15 14:36:46.757010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.757027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.757042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.757050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.757059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.377 [2024-07-15 14:36:46.757074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.377 [2024-07-15 14:36:46.759077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.377 [2024-07-15 14:36:46.759166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.759187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.377 [2024-07-15 14:36:46.759197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.759213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.759227] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.759237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.759246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.377 [2024-07-15 14:36:46.759260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.377 [2024-07-15 14:36:46.766949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.377 [2024-07-15 14:36:46.767039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.767060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.377 [2024-07-15 14:36:46.767071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.767087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.767102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.767110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.767119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.377 [2024-07-15 14:36:46.767134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.377 [2024-07-15 14:36:46.769136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.377 [2024-07-15 14:36:46.769223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.769243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.377 [2024-07-15 14:36:46.769253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.769270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.769284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.769292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.769303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.377 [2024-07-15 14:36:46.769317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.377 [2024-07-15 14:36:46.777008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.377 [2024-07-15 14:36:46.777108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.377 [2024-07-15 14:36:46.777129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.377 [2024-07-15 14:36:46.777141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.377 [2024-07-15 14:36:46.777157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.377 [2024-07-15 14:36:46.777171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.377 [2024-07-15 14:36:46.777180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.377 [2024-07-15 14:36:46.777189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.378 [2024-07-15 14:36:46.777204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.378 [2024-07-15 14:36:46.779192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.378 [2024-07-15 14:36:46.779274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.779295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.378 [2024-07-15 14:36:46.779306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.779323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.779337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.779345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.779354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.378 [2024-07-15 14:36:46.779369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.378 [2024-07-15 14:36:46.787102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.378 [2024-07-15 14:36:46.787202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.787222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.378 [2024-07-15 14:36:46.787233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.787250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.787264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.787273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.787282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.378 [2024-07-15 14:36:46.787297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.378 [2024-07-15 14:36:46.789242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.378 [2024-07-15 14:36:46.789356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.789377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.378 [2024-07-15 14:36:46.789388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.789404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.789418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.789427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.789436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.378 [2024-07-15 14:36:46.789451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.378 [2024-07-15 14:36:46.797171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.378 [2024-07-15 14:36:46.797285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.797306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.378 [2024-07-15 14:36:46.797317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.797333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.797347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.797356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.797365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.378 [2024-07-15 14:36:46.797380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.378 [2024-07-15 14:36:46.799311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.378 [2024-07-15 14:36:46.799411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.799431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.378 [2024-07-15 14:36:46.799442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.799458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.799473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.799482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.799491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.378 [2024-07-15 14:36:46.799505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.378 [2024-07-15 14:36:46.807241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.378 [2024-07-15 14:36:46.807324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.807345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.378 [2024-07-15 14:36:46.807355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.807372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.807386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.807395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.807404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.378 [2024-07-15 14:36:46.807418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.378 [2024-07-15 14:36:46.809379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.378 [2024-07-15 14:36:46.809492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.809512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.378 [2024-07-15 14:36:46.809523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.809539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.809553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.809562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.809571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.378 [2024-07-15 14:36:46.809585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.378 [2024-07-15 14:36:46.817295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.378 [2024-07-15 14:36:46.817424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.817447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.378 [2024-07-15 14:36:46.817458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.817475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.817490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.817498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.817507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.378 [2024-07-15 14:36:46.817522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.378 [2024-07-15 14:36:46.819476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.378 [2024-07-15 14:36:46.819593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.819614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.378 [2024-07-15 14:36:46.819625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.819642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.819656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.819665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.819674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.378 [2024-07-15 14:36:46.819688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.378 [2024-07-15 14:36:46.827386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.378 [2024-07-15 14:36:46.827487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.827508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.378 [2024-07-15 14:36:46.827519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.827535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.827549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.827558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.378 [2024-07-15 14:36:46.827567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.378 [2024-07-15 14:36:46.827581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.378 [2024-07-15 14:36:46.829544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.378 [2024-07-15 14:36:46.829658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.378 [2024-07-15 14:36:46.829678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.378 [2024-07-15 14:36:46.829689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.378 [2024-07-15 14:36:46.829705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.378 [2024-07-15 14:36:46.829732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.378 [2024-07-15 14:36:46.829742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.829751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.379 [2024-07-15 14:36:46.829766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.379 [2024-07-15 14:36:46.837455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.379 [2024-07-15 14:36:46.837580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.837601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.379 [2024-07-15 14:36:46.837612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.837628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.837643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.837652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.837660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.379 [2024-07-15 14:36:46.837675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.379 [2024-07-15 14:36:46.839627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.379 [2024-07-15 14:36:46.839753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.839775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.379 [2024-07-15 14:36:46.839785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.839802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.839817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.839826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.839834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.379 [2024-07-15 14:36:46.839848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.379 [2024-07-15 14:36:46.847528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.379 [2024-07-15 14:36:46.847631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.847651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.379 [2024-07-15 14:36:46.847662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.847678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.847692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.847715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.847725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.379 [2024-07-15 14:36:46.847740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.379 [2024-07-15 14:36:46.849696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.379 [2024-07-15 14:36:46.849820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.849840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.379 [2024-07-15 14:36:46.849851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.849867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.849881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.849890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.849899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.379 [2024-07-15 14:36:46.849914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.379 [2024-07-15 14:36:46.857599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.379 [2024-07-15 14:36:46.857747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.857768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.379 [2024-07-15 14:36:46.857779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.857797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.857811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.857819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.857829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.379 [2024-07-15 14:36:46.857843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.379 [2024-07-15 14:36:46.859793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.379 [2024-07-15 14:36:46.859908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.859929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.379 [2024-07-15 14:36:46.859939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.859956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.859970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.859979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.859988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.379 [2024-07-15 14:36:46.860002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.379 [2024-07-15 14:36:46.867670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.379 [2024-07-15 14:36:46.867792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.867814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.379 [2024-07-15 14:36:46.867825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.867841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.867856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.867864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.867873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.379 [2024-07-15 14:36:46.867888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.379 [2024-07-15 14:36:46.869875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.379 [2024-07-15 14:36:46.870005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.870026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.379 [2024-07-15 14:36:46.870036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.870053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.870067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.870076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.870085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.379 [2024-07-15 14:36:46.870099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.379 [2024-07-15 14:36:46.877759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.379 [2024-07-15 14:36:46.877874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.877895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.379 [2024-07-15 14:36:46.877906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.877922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.877936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.877945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.877954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.379 [2024-07-15 14:36:46.877968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.379 [2024-07-15 14:36:46.879975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.379 [2024-07-15 14:36:46.880075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.880095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.379 [2024-07-15 14:36:46.880106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.880122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.880136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.880145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.880154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.379 [2024-07-15 14:36:46.880168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.379 [2024-07-15 14:36:46.887829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.379 [2024-07-15 14:36:46.887918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.379 [2024-07-15 14:36:46.887939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.379 [2024-07-15 14:36:46.887950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.379 [2024-07-15 14:36:46.887967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.379 [2024-07-15 14:36:46.887981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.379 [2024-07-15 14:36:46.887990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.379 [2024-07-15 14:36:46.887998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.380 [2024-07-15 14:36:46.888013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.380 [2024-07-15 14:36:46.890047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.380 [2024-07-15 14:36:46.890148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.890168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.380 [2024-07-15 14:36:46.890178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.890194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.890208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.890217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.890226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.380 [2024-07-15 14:36:46.890241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.380 [2024-07-15 14:36:46.897884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.380 [2024-07-15 14:36:46.897999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.898020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.380 [2024-07-15 14:36:46.898031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.898048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.898062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.898070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.898079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.380 [2024-07-15 14:36:46.898094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.380 [2024-07-15 14:36:46.900117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.380 [2024-07-15 14:36:46.900246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.900266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.380 [2024-07-15 14:36:46.900277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.900293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.900317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.900328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.900337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.380 [2024-07-15 14:36:46.900351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.380 [2024-07-15 14:36:46.907967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.380 [2024-07-15 14:36:46.908069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.908089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.380 [2024-07-15 14:36:46.908100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.908116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.908130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.908139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.908148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.380 [2024-07-15 14:36:46.908162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.380 [2024-07-15 14:36:46.910201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.380 [2024-07-15 14:36:46.910285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.910305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.380 [2024-07-15 14:36:46.910315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.910345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.910361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.910370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.910378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.380 [2024-07-15 14:36:46.910393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.380 [2024-07-15 14:36:46.918039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.380 [2024-07-15 14:36:46.918140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.918161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.380 [2024-07-15 14:36:46.918171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.918188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.918202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.918211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.918220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.380 [2024-07-15 14:36:46.918234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.380 [2024-07-15 14:36:46.920254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.380 [2024-07-15 14:36:46.920376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.920398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.380 [2024-07-15 14:36:46.920410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.920436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.920452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.920461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.920470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.380 [2024-07-15 14:36:46.920485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.380 [2024-07-15 14:36:46.928124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.380 [2024-07-15 14:36:46.928225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.380 [2024-07-15 14:36:46.928246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.380 [2024-07-15 14:36:46.928257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.380 [2024-07-15 14:36:46.928273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.380 [2024-07-15 14:36:46.928287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.380 [2024-07-15 14:36:46.928296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.380 [2024-07-15 14:36:46.928305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.380 [2024-07-15 14:36:46.928319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.380 [2024-07-15 14:36:46.930345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.380 [2024-07-15 14:36:46.930444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.930464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.381 [2024-07-15 14:36:46.930475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.930492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.930506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.930516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.930524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.381 [2024-07-15 14:36:46.930539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.381 [2024-07-15 14:36:46.938195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.381 [2024-07-15 14:36:46.938289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.938310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.381 [2024-07-15 14:36:46.938333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.938351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.938366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.938377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.938391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.381 [2024-07-15 14:36:46.938414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.381 [2024-07-15 14:36:46.940403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.381 [2024-07-15 14:36:46.940490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.940510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.381 [2024-07-15 14:36:46.940520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.940537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.940551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.940560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.940569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.381 [2024-07-15 14:36:46.940584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.381 [2024-07-15 14:36:46.948257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.381 [2024-07-15 14:36:46.948346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.948366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.381 [2024-07-15 14:36:46.948377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.948394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.948419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.948429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.948438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.381 [2024-07-15 14:36:46.948453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.381 [2024-07-15 14:36:46.950457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.381 [2024-07-15 14:36:46.950543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.950564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.381 [2024-07-15 14:36:46.950574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.950591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.950605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.950614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.950623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.381 [2024-07-15 14:36:46.950638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.381 [2024-07-15 14:36:46.958314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.381 [2024-07-15 14:36:46.958414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.958435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.381 [2024-07-15 14:36:46.958446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.958462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.958476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.958485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.958494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.381 [2024-07-15 14:36:46.958508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.381 [2024-07-15 14:36:46.960512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.381 [2024-07-15 14:36:46.960597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.960617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.381 [2024-07-15 14:36:46.960627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.960644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.960658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.960667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.960676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.381 [2024-07-15 14:36:46.960690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.381 [2024-07-15 14:36:46.968383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.381 [2024-07-15 14:36:46.968468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.381 [2024-07-15 14:36:46.968488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.381 [2024-07-15 14:36:46.968499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.381 [2024-07-15 14:36:46.968525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.381 [2024-07-15 14:36:46.968541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.381 [2024-07-15 14:36:46.968549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.381 [2024-07-15 14:36:46.968563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.381 [2024-07-15 14:36:46.968585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.642 [2024-07-15 14:36:46.970568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.642 [2024-07-15 14:36:46.970655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.642 [2024-07-15 14:36:46.970675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.642 [2024-07-15 14:36:46.970686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.642 [2024-07-15 14:36:46.970714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.642 [2024-07-15 14:36:46.970733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.642 [2024-07-15 14:36:46.970748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.642 [2024-07-15 14:36:46.970763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.642 [2024-07-15 14:36:46.970782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.642 [2024-07-15 14:36:46.978439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.642 [2024-07-15 14:36:46.978526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.642 [2024-07-15 14:36:46.978547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.642 [2024-07-15 14:36:46.978557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.642 [2024-07-15 14:36:46.978573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.642 [2024-07-15 14:36:46.978587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.642 [2024-07-15 14:36:46.978596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.642 [2024-07-15 14:36:46.978604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.642 [2024-07-15 14:36:46.978625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.642 [2024-07-15 14:36:46.980622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.642 [2024-07-15 14:36:46.980732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.642 [2024-07-15 14:36:46.980753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.642 [2024-07-15 14:36:46.980763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.642 [2024-07-15 14:36:46.980779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.642 [2024-07-15 14:36:46.980793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.642 [2024-07-15 14:36:46.980822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.642 [2024-07-15 14:36:46.980836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.642 [2024-07-15 14:36:46.980856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.643 [2024-07-15 14:36:46.988495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.643 [2024-07-15 14:36:46.988593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:46.988613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.643 [2024-07-15 14:36:46.988623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:46.988639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:46.988669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:46.988683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:46.988697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.643 [2024-07-15 14:36:46.988715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.643 [2024-07-15 14:36:46.990676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.643 [2024-07-15 14:36:46.990797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:46.990817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.643 [2024-07-15 14:36:46.990828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:46.990844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:46.990877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:46.990892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:46.990905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.643 [2024-07-15 14:36:46.990921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.643 [2024-07-15 14:36:46.998548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.643 [2024-07-15 14:36:46.998633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:46.998653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.643 [2024-07-15 14:36:46.998664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:46.998695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:46.998740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:46.998758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:46.998768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.643 [2024-07-15 14:36:46.998786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.643 [2024-07-15 14:36:47.000758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.643 [2024-07-15 14:36:47.000854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.000874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.643 [2024-07-15 14:36:47.000883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.000899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.000934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.000948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.000960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.643 [2024-07-15 14:36:47.000975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.643 [2024-07-15 14:36:47.008602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.643 [2024-07-15 14:36:47.008699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.008733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.643 [2024-07-15 14:36:47.008745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.008761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.008791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.008803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.008817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.643 [2024-07-15 14:36:47.008837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.643 [2024-07-15 14:36:47.010808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.643 [2024-07-15 14:36:47.010905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.010925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.643 [2024-07-15 14:36:47.010935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.010951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.010986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.011001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.011013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.643 [2024-07-15 14:36:47.011028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.643 [2024-07-15 14:36:47.018654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.643 [2024-07-15 14:36:47.018749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.018771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.643 [2024-07-15 14:36:47.018781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.018798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.018812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.018822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.018836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.643 [2024-07-15 14:36:47.018857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.643 [2024-07-15 14:36:47.020860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.643 [2024-07-15 14:36:47.020957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.020977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.643 [2024-07-15 14:36:47.020988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.021004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.021022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.021036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.021050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.643 [2024-07-15 14:36:47.021068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.643 [2024-07-15 14:36:47.028719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.643 [2024-07-15 14:36:47.028805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.028826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.643 [2024-07-15 14:36:47.028837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.028853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.028867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.028876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.028887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.643 [2024-07-15 14:36:47.028909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.643 [2024-07-15 14:36:47.030912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.643 [2024-07-15 14:36:47.030995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.031016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.643 [2024-07-15 14:36:47.031027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.031043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.031057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.031066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.031079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.643 [2024-07-15 14:36:47.031101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.643 [2024-07-15 14:36:47.038777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.643 [2024-07-15 14:36:47.038863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.643 [2024-07-15 14:36:47.038883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.643 [2024-07-15 14:36:47.038894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.643 [2024-07-15 14:36:47.038910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.643 [2024-07-15 14:36:47.038924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.643 [2024-07-15 14:36:47.038933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.643 [2024-07-15 14:36:47.038943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.644 [2024-07-15 14:36:47.038964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.644 [2024-07-15 14:36:47.040965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.644 [2024-07-15 14:36:47.041050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.041071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.644 [2024-07-15 14:36:47.041081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.041097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.041111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.041120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.041131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.644 [2024-07-15 14:36:47.041152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.644 [2024-07-15 14:36:47.048833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.644 [2024-07-15 14:36:47.048917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.048937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.644 [2024-07-15 14:36:47.048948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.048964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.048979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.048987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.048997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.644 [2024-07-15 14:36:47.049018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.644 [2024-07-15 14:36:47.051019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.644 [2024-07-15 14:36:47.051132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.051152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.644 [2024-07-15 14:36:47.051163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.051179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.051193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.051202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.051212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.644 [2024-07-15 14:36:47.051233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.644 [2024-07-15 14:36:47.058888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.644 [2024-07-15 14:36:47.058974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.058995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.644 [2024-07-15 14:36:47.059006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.059022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.059036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.059044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.059058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.644 [2024-07-15 14:36:47.059080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.644 [2024-07-15 14:36:47.061086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.644 [2024-07-15 14:36:47.061183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.061203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.644 [2024-07-15 14:36:47.061213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.061229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.061243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.061253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.061266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.644 [2024-07-15 14:36:47.061288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.644 [2024-07-15 14:36:47.068944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.644 [2024-07-15 14:36:47.069028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.069049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.644 [2024-07-15 14:36:47.069059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.069075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.069089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.069098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.069108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.644 [2024-07-15 14:36:47.069130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.644 [2024-07-15 14:36:47.071139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.644 [2024-07-15 14:36:47.071225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.071245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.644 [2024-07-15 14:36:47.071256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.071272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.071286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.071295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.071307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.644 [2024-07-15 14:36:47.071328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.644 [2024-07-15 14:36:47.078999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.644 [2024-07-15 14:36:47.079100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.079121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.644 [2024-07-15 14:36:47.079132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.079148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.079162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.079174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.079188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.644 [2024-07-15 14:36:47.079209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.644 [2024-07-15 14:36:47.081192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.644 [2024-07-15 14:36:47.081290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.081310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.644 [2024-07-15 14:36:47.081320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.081336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.081350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.081363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.081378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.644 [2024-07-15 14:36:47.081398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.644 [2024-07-15 14:36:47.089054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.644 [2024-07-15 14:36:47.089153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.089173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.644 [2024-07-15 14:36:47.089184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.089200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.089214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.644 [2024-07-15 14:36:47.089224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.644 [2024-07-15 14:36:47.089237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.644 [2024-07-15 14:36:47.089260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.644 [2024-07-15 14:36:47.091246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.644 [2024-07-15 14:36:47.091345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.644 [2024-07-15 14:36:47.091365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.644 [2024-07-15 14:36:47.091376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.644 [2024-07-15 14:36:47.091392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.644 [2024-07-15 14:36:47.091407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.091420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.091434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.645 [2024-07-15 14:36:47.091454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.645 [2024-07-15 14:36:47.099108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.645 [2024-07-15 14:36:47.099210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.099231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.645 [2024-07-15 14:36:47.099241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.099258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.099272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.099281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.099294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.645 [2024-07-15 14:36:47.099317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.645 [2024-07-15 14:36:47.101298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.645 [2024-07-15 14:36:47.101396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.101415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.645 [2024-07-15 14:36:47.101426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.101442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.101456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.101465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.101476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.645 [2024-07-15 14:36:47.101498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.645 [2024-07-15 14:36:47.109162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.645 [2024-07-15 14:36:47.109260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.109280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.645 [2024-07-15 14:36:47.109290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.109306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.109319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.109328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.109336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.645 [2024-07-15 14:36:47.109368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.645 [2024-07-15 14:36:47.111351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.645 [2024-07-15 14:36:47.111465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.111485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.645 [2024-07-15 14:36:47.111495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.111511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.111541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.111554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.111568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.645 [2024-07-15 14:36:47.111587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.645 [2024-07-15 14:36:47.119214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.645 [2024-07-15 14:36:47.119313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.119333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.645 [2024-07-15 14:36:47.119344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.119359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.119373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.119381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.119407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.645 [2024-07-15 14:36:47.119428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.645 [2024-07-15 14:36:47.121404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.645 [2024-07-15 14:36:47.121516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.121536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.645 [2024-07-15 14:36:47.121546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.121561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.121574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.121583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.121609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.645 [2024-07-15 14:36:47.121630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.645 [2024-07-15 14:36:47.129268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.645 [2024-07-15 14:36:47.129374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.129395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.645 [2024-07-15 14:36:47.129405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.129421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.129434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.129459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.129472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.645 [2024-07-15 14:36:47.129494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.645 [2024-07-15 14:36:47.131457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.645 [2024-07-15 14:36:47.131555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.131575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.645 [2024-07-15 14:36:47.131585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.131600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.131630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.131642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.131657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.645 [2024-07-15 14:36:47.131677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.645 [2024-07-15 14:36:47.139326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.645 [2024-07-15 14:36:47.139427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.139447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.645 [2024-07-15 14:36:47.139458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.139473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.139487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.139515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.139529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.645 [2024-07-15 14:36:47.139550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.645 [2024-07-15 14:36:47.141510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.645 [2024-07-15 14:36:47.141595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.141615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.645 [2024-07-15 14:36:47.141626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.645 [2024-07-15 14:36:47.141642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.645 [2024-07-15 14:36:47.141657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.645 [2024-07-15 14:36:47.141670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.645 [2024-07-15 14:36:47.141685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.645 [2024-07-15 14:36:47.141734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.645 [2024-07-15 14:36:47.149381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.645 [2024-07-15 14:36:47.149482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.645 [2024-07-15 14:36:47.149503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.646 [2024-07-15 14:36:47.149513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.149529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.149559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.149572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.149587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.646 [2024-07-15 14:36:47.149605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.646 [2024-07-15 14:36:47.151564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.646 [2024-07-15 14:36:47.151663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.151683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.646 [2024-07-15 14:36:47.151693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.151709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.151760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.151773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.151783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.646 [2024-07-15 14:36:47.151800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.646 [2024-07-15 14:36:47.159434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.646 [2024-07-15 14:36:47.159535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.159556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.646 [2024-07-15 14:36:47.159566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.159582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.159597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.159612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.159626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.646 [2024-07-15 14:36:47.159645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.646 [2024-07-15 14:36:47.161617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.646 [2024-07-15 14:36:47.161714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.161735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.646 [2024-07-15 14:36:47.161745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.161762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.161776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.161786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.161799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.646 [2024-07-15 14:36:47.161821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.646 [2024-07-15 14:36:47.169502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.646 [2024-07-15 14:36:47.169601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.169622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.646 [2024-07-15 14:36:47.169632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.169648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.169663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.169673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.169688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.646 [2024-07-15 14:36:47.169709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.646 [2024-07-15 14:36:47.171671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.646 [2024-07-15 14:36:47.171776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.171798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.646 [2024-07-15 14:36:47.171809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.171825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.171839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.171853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.171867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.646 [2024-07-15 14:36:47.171886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.646 [2024-07-15 14:36:47.179554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.646 [2024-07-15 14:36:47.179654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.179675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.646 [2024-07-15 14:36:47.179686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.179702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.179732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.179751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.179765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.646 [2024-07-15 14:36:47.179781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.646 [2024-07-15 14:36:47.181760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.646 [2024-07-15 14:36:47.181858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.181878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.646 [2024-07-15 14:36:47.181889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.181904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.181920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.181934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.181949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.646 [2024-07-15 14:36:47.181968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.646 [2024-07-15 14:36:47.189608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.646 [2024-07-15 14:36:47.189708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.189743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.646 [2024-07-15 14:36:47.189754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.189770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.189784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.189796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.189811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.646 [2024-07-15 14:36:47.189832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.646 [2024-07-15 14:36:47.191812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.646 [2024-07-15 14:36:47.191910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.646 [2024-07-15 14:36:47.191931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.646 [2024-07-15 14:36:47.191941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.646 [2024-07-15 14:36:47.191958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.646 [2024-07-15 14:36:47.191973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.646 [2024-07-15 14:36:47.191987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.646 [2024-07-15 14:36:47.192002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.647 [2024-07-15 14:36:47.192019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.647 [2024-07-15 14:36:47.199662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.647 [2024-07-15 14:36:47.199771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.199793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.647 [2024-07-15 14:36:47.199804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.199820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.199840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.199856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.199866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.647 [2024-07-15 14:36:47.199881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.647 [2024-07-15 14:36:47.201865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.647 [2024-07-15 14:36:47.201964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.201984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.647 [2024-07-15 14:36:47.201995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.202011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.202033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.202047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.202056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.647 [2024-07-15 14:36:47.202071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.647 [2024-07-15 14:36:47.209741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.647 [2024-07-15 14:36:47.209842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.209863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.647 [2024-07-15 14:36:47.209873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.209889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.209910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.209926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.209936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.647 [2024-07-15 14:36:47.209951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.647 [2024-07-15 14:36:47.211918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.647 [2024-07-15 14:36:47.212018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.212039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.647 [2024-07-15 14:36:47.212049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.212066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.212088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.212101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.212111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.647 [2024-07-15 14:36:47.212126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.647 [2024-07-15 14:36:47.219795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.647 [2024-07-15 14:36:47.219912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.219932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.647 [2024-07-15 14:36:47.219943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.219959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.219977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.219992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.220004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.647 [2024-07-15 14:36:47.220020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.647 [2024-07-15 14:36:47.221976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.647 [2024-07-15 14:36:47.222059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.222078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.647 [2024-07-15 14:36:47.222089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.222108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.222130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.222145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.222160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.647 [2024-07-15 14:36:47.222179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.647 [2024-07-15 14:36:47.229866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.647 [2024-07-15 14:36:47.229951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.229972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.647 [2024-07-15 14:36:47.229982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.229999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.230014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.230028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.230042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.647 [2024-07-15 14:36:47.230059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.647 [2024-07-15 14:36:47.232031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.647 [2024-07-15 14:36:47.232119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.647 [2024-07-15 14:36:47.232140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.647 [2024-07-15 14:36:47.232150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.647 [2024-07-15 14:36:47.232169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.647 [2024-07-15 14:36:47.232192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.647 [2024-07-15 14:36:47.232205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.647 [2024-07-15 14:36:47.232220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.647 [2024-07-15 14:36:47.232239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.909 [2024-07-15 14:36:47.239921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.909 [2024-07-15 14:36:47.240007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.240028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.909 [2024-07-15 14:36:47.240039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.240055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.240070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.240078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.240090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.909 [2024-07-15 14:36:47.240112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.909 [2024-07-15 14:36:47.242086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.909 [2024-07-15 14:36:47.242170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.242191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.909 [2024-07-15 14:36:47.242201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.242217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.242232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.242258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.242273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.909 [2024-07-15 14:36:47.242289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.909 [2024-07-15 14:36:47.249975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.909 [2024-07-15 14:36:47.250077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.250098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.909 [2024-07-15 14:36:47.250109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.250125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.250144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.250159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.250171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.909 [2024-07-15 14:36:47.250186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.909 [2024-07-15 14:36:47.252139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.909 [2024-07-15 14:36:47.252248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.252268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.909 [2024-07-15 14:36:47.252279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.252295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.252309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.252318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.252327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.909 [2024-07-15 14:36:47.252348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.909 [2024-07-15 14:36:47.260063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.909 [2024-07-15 14:36:47.260151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.260172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.909 [2024-07-15 14:36:47.260183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.260199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.260214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.260228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.260242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.909 [2024-07-15 14:36:47.260269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.909 [2024-07-15 14:36:47.262208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.909 [2024-07-15 14:36:47.262293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.262313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.909 [2024-07-15 14:36:47.262337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.262355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.262371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.262386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.262400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.909 [2024-07-15 14:36:47.262419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.909 [2024-07-15 14:36:47.270118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.909 [2024-07-15 14:36:47.270210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.270231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.909 [2024-07-15 14:36:47.270242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.270258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.270272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.909 [2024-07-15 14:36:47.270281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.909 [2024-07-15 14:36:47.270290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.909 [2024-07-15 14:36:47.270305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.909 [2024-07-15 14:36:47.272266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.909 [2024-07-15 14:36:47.272375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.909 [2024-07-15 14:36:47.272396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.909 [2024-07-15 14:36:47.272407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.909 [2024-07-15 14:36:47.272424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.909 [2024-07-15 14:36:47.272438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.272447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.272457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.910 [2024-07-15 14:36:47.272478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.910 [2024-07-15 14:36:47.280179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.910 [2024-07-15 14:36:47.280267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.280288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.910 [2024-07-15 14:36:47.280299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.280316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.280330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.280339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.280347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.910 [2024-07-15 14:36:47.280362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.910 [2024-07-15 14:36:47.282339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.910 [2024-07-15 14:36:47.282426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.282447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.910 [2024-07-15 14:36:47.282457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.282473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.282487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.282496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.282506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.910 [2024-07-15 14:36:47.282520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.910 [2024-07-15 14:36:47.290236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.910 [2024-07-15 14:36:47.290333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.290354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.910 [2024-07-15 14:36:47.290365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.290382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.290396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.290404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.290413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.910 [2024-07-15 14:36:47.290428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.910 [2024-07-15 14:36:47.292394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.910 [2024-07-15 14:36:47.292481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.292501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.910 [2024-07-15 14:36:47.292512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.292529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.292543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.292552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.292561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.910 [2024-07-15 14:36:47.292585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.910 [2024-07-15 14:36:47.300292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.910 [2024-07-15 14:36:47.300379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.300399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.910 [2024-07-15 14:36:47.300410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.300427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.300441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.300449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.300458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.910 [2024-07-15 14:36:47.300473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.910 [2024-07-15 14:36:47.302449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.910 [2024-07-15 14:36:47.302536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.302557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.910 [2024-07-15 14:36:47.302567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.302583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.302598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.302606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.302615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.910 [2024-07-15 14:36:47.302630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.910 [2024-07-15 14:36:47.310348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.910 [2024-07-15 14:36:47.310434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.310454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.910 [2024-07-15 14:36:47.310465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.310481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.310496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.310504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.310513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.910 [2024-07-15 14:36:47.310528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.910 [2024-07-15 14:36:47.312504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.910 [2024-07-15 14:36:47.312589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.312609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.910 [2024-07-15 14:36:47.312620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.312636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.312660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.312670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.312679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.910 [2024-07-15 14:36:47.312694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.910 [2024-07-15 14:36:47.320404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.910 [2024-07-15 14:36:47.320489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.320509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.910 [2024-07-15 14:36:47.320520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.320538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.320552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.320561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.910 [2024-07-15 14:36:47.320570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.910 [2024-07-15 14:36:47.320584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.910 [2024-07-15 14:36:47.322560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.910 [2024-07-15 14:36:47.322645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.910 [2024-07-15 14:36:47.322666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.910 [2024-07-15 14:36:47.322676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.910 [2024-07-15 14:36:47.322693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.910 [2024-07-15 14:36:47.322721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.910 [2024-07-15 14:36:47.322731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.322740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.911 [2024-07-15 14:36:47.322755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.911 [2024-07-15 14:36:47.330461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.911 [2024-07-15 14:36:47.330551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.330572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.911 [2024-07-15 14:36:47.330583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.330600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.330614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.330623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.330632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.911 [2024-07-15 14:36:47.330647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.911 [2024-07-15 14:36:47.332616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.911 [2024-07-15 14:36:47.332729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.332754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.911 [2024-07-15 14:36:47.332766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.332793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.332809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.332817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.332826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.911 [2024-07-15 14:36:47.332841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.911 [2024-07-15 14:36:47.340519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.911 [2024-07-15 14:36:47.340618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.340646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.911 [2024-07-15 14:36:47.340658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.340675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.340689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.340711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.340721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.911 [2024-07-15 14:36:47.340748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.911 [2024-07-15 14:36:47.342686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.911 [2024-07-15 14:36:47.342780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.342802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.911 [2024-07-15 14:36:47.342813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.342829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.342843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.342852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.342867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.911 [2024-07-15 14:36:47.342881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.911 [2024-07-15 14:36:47.350580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.911 [2024-07-15 14:36:47.350675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.350707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.911 [2024-07-15 14:36:47.350720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.350737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.350752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.350761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.350770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.911 [2024-07-15 14:36:47.350784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.911 [2024-07-15 14:36:47.352747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.911 [2024-07-15 14:36:47.352849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.352869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.911 [2024-07-15 14:36:47.352880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.352896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.352911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.352919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.352928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.911 [2024-07-15 14:36:47.352943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.911 [2024-07-15 14:36:47.360640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.911 [2024-07-15 14:36:47.360738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.360759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.911 [2024-07-15 14:36:47.360770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.360787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.360812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.360823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.360832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.911 [2024-07-15 14:36:47.360847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.911 [2024-07-15 14:36:47.362817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.911 [2024-07-15 14:36:47.362901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.362922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.911 [2024-07-15 14:36:47.362932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.362949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.362963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.362972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.362981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.911 [2024-07-15 14:36:47.362995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.911 [2024-07-15 14:36:47.370704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.911 [2024-07-15 14:36:47.370790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.370811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.911 [2024-07-15 14:36:47.370821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.370838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.370853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.370861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.370870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.911 [2024-07-15 14:36:47.370885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.911 [2024-07-15 14:36:47.372871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.911 [2024-07-15 14:36:47.372955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.372975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.911 [2024-07-15 14:36:47.372985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.911 [2024-07-15 14:36:47.373002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.911 [2024-07-15 14:36:47.373016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.911 [2024-07-15 14:36:47.373025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.911 [2024-07-15 14:36:47.373034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.911 [2024-07-15 14:36:47.373048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.911 [2024-07-15 14:36:47.380760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.911 [2024-07-15 14:36:47.380844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.911 [2024-07-15 14:36:47.380865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.912 [2024-07-15 14:36:47.380876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.380902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.380918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.380926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.380935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.912 [2024-07-15 14:36:47.380950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.912 [2024-07-15 14:36:47.382925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.912 [2024-07-15 14:36:47.383009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.383030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.912 [2024-07-15 14:36:47.383041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.383057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.383071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.383080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.383089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.912 [2024-07-15 14:36:47.383103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.912 [2024-07-15 14:36:47.390814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.912 [2024-07-15 14:36:47.390903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.390923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.912 [2024-07-15 14:36:47.390934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.390951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.390965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.390974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.390983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.912 [2024-07-15 14:36:47.390998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.912 [2024-07-15 14:36:47.392979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.912 [2024-07-15 14:36:47.393064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.393084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.912 [2024-07-15 14:36:47.393094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.393111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.393125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.393134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.393142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.912 [2024-07-15 14:36:47.393157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.912 [2024-07-15 14:36:47.400873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.912 [2024-07-15 14:36:47.400959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.400979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.912 [2024-07-15 14:36:47.400989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.401005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.401019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.401028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.401037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.912 [2024-07-15 14:36:47.401051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.912 [2024-07-15 14:36:47.403034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.912 [2024-07-15 14:36:47.403117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.403137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.912 [2024-07-15 14:36:47.403148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.403164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.403178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.403187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.403196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.912 [2024-07-15 14:36:47.403210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.912 [2024-07-15 14:36:47.410930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.912 [2024-07-15 14:36:47.411019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.411039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.912 [2024-07-15 14:36:47.411050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.411066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.411080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.411089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.411098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.912 [2024-07-15 14:36:47.411112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.912 [2024-07-15 14:36:47.413086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.912 [2024-07-15 14:36:47.413170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.413190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.912 [2024-07-15 14:36:47.413201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.413218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.413231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.413240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.413249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.912 [2024-07-15 14:36:47.413263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.912 [2024-07-15 14:36:47.420988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.912 [2024-07-15 14:36:47.421075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.421096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.912 [2024-07-15 14:36:47.421107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.421123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.421137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.421146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.421155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.912 [2024-07-15 14:36:47.421169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.912 [2024-07-15 14:36:47.423139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.912 [2024-07-15 14:36:47.423224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.423244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.912 [2024-07-15 14:36:47.423255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.423271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.423285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.423294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.423303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.912 [2024-07-15 14:36:47.423317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.912 [2024-07-15 14:36:47.431048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.912 [2024-07-15 14:36:47.431146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.912 [2024-07-15 14:36:47.431168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.912 [2024-07-15 14:36:47.431178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.912 [2024-07-15 14:36:47.431195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.912 [2024-07-15 14:36:47.431210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.912 [2024-07-15 14:36:47.431219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.912 [2024-07-15 14:36:47.431228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.912 [2024-07-15 14:36:47.431243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.912 [2024-07-15 14:36:47.433195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.912 [2024-07-15 14:36:47.433286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.433307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.913 [2024-07-15 14:36:47.433318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.433335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.433349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.433358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.433367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.913 [2024-07-15 14:36:47.433381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.913 [2024-07-15 14:36:47.441120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.913 [2024-07-15 14:36:47.441257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.441280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.913 [2024-07-15 14:36:47.441292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.441310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.441325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.441334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.441344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.913 [2024-07-15 14:36:47.441359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.913 [2024-07-15 14:36:47.443253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.913 [2024-07-15 14:36:47.443343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.443365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.913 [2024-07-15 14:36:47.443376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.443392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.443407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.443416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.443425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.913 [2024-07-15 14:36:47.443439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.913 [2024-07-15 14:36:47.451193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.913 [2024-07-15 14:36:47.451280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.451301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.913 [2024-07-15 14:36:47.451312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.451329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.451343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.451352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.451361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.913 [2024-07-15 14:36:47.451376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.913 [2024-07-15 14:36:47.453309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.913 [2024-07-15 14:36:47.453395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.453415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.913 [2024-07-15 14:36:47.453426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.453442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.453456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.453465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.453473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.913 [2024-07-15 14:36:47.453488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.913 [2024-07-15 14:36:47.461249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.913 [2024-07-15 14:36:47.461344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.461366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.913 [2024-07-15 14:36:47.461377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.461393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.461407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.461416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.461425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.913 [2024-07-15 14:36:47.461440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.913 [2024-07-15 14:36:47.463364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.913 [2024-07-15 14:36:47.463449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.463471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.913 [2024-07-15 14:36:47.463481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.463497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.463511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.463520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.463529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.913 [2024-07-15 14:36:47.463543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.913 [2024-07-15 14:36:47.471310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.913 [2024-07-15 14:36:47.471396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.471416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.913 [2024-07-15 14:36:47.471427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.471443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.471458] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.471466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.913 [2024-07-15 14:36:47.471475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.913 [2024-07-15 14:36:47.471489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.913 [2024-07-15 14:36:47.473418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.913 [2024-07-15 14:36:47.473502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.913 [2024-07-15 14:36:47.473523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.913 [2024-07-15 14:36:47.473533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.913 [2024-07-15 14:36:47.473549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.913 [2024-07-15 14:36:47.473564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.913 [2024-07-15 14:36:47.473572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.914 [2024-07-15 14:36:47.473581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.914 [2024-07-15 14:36:47.473595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.914 [2024-07-15 14:36:47.481366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.914 [2024-07-15 14:36:47.481457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.914 [2024-07-15 14:36:47.481477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.914 [2024-07-15 14:36:47.481487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.914 [2024-07-15 14:36:47.481504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.914 [2024-07-15 14:36:47.481518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.914 [2024-07-15 14:36:47.481527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.914 [2024-07-15 14:36:47.481536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.914 [2024-07-15 14:36:47.481551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.914 [2024-07-15 14:36:47.483472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.914 [2024-07-15 14:36:47.483573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.914 [2024-07-15 14:36:47.483594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.914 [2024-07-15 14:36:47.483604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.914 [2024-07-15 14:36:47.483620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.914 [2024-07-15 14:36:47.483635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.914 [2024-07-15 14:36:47.483644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.914 [2024-07-15 14:36:47.483653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.914 [2024-07-15 14:36:47.483667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.914 [2024-07-15 14:36:47.491438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:07.914 [2024-07-15 14:36:47.491540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.914 [2024-07-15 14:36:47.491561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:07.914 [2024-07-15 14:36:47.491571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:07.914 [2024-07-15 14:36:47.491587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:07.914 [2024-07-15 14:36:47.491602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:07.914 [2024-07-15 14:36:47.491611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:07.914 [2024-07-15 14:36:47.491620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:07.914 [2024-07-15 14:36:47.491634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.914 [2024-07-15 14:36:47.493541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:07.914 [2024-07-15 14:36:47.493625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.914 [2024-07-15 14:36:47.493645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:07.914 [2024-07-15 14:36:47.493655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:07.914 [2024-07-15 14:36:47.493671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:07.914 [2024-07-15 14:36:47.493685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:07.914 [2024-07-15 14:36:47.493694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:07.914 [2024-07-15 14:36:47.493717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:07.914 [2024-07-15 14:36:47.493732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.175 [2024-07-15 14:36:47.501510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.175 [2024-07-15 14:36:47.501596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.175 [2024-07-15 14:36:47.501616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.175 [2024-07-15 14:36:47.501627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.175 [2024-07-15 14:36:47.501644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.175 [2024-07-15 14:36:47.501658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.175 [2024-07-15 14:36:47.501667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.175 [2024-07-15 14:36:47.501675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.175 [2024-07-15 14:36:47.501690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.175 [2024-07-15 14:36:47.503593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.176 [2024-07-15 14:36:47.503705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.503737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.176 [2024-07-15 14:36:47.503748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.503765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.503778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.503787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.503796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.176 [2024-07-15 14:36:47.503810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.176 [2024-07-15 14:36:47.511565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.176 [2024-07-15 14:36:47.511666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.511687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.176 [2024-07-15 14:36:47.511709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.511727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.511742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.511751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.511760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.176 [2024-07-15 14:36:47.511775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.176 [2024-07-15 14:36:47.513659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.176 [2024-07-15 14:36:47.513784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.513805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.176 [2024-07-15 14:36:47.513816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.513832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.513847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.513855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.513864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.176 [2024-07-15 14:36:47.513878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.176 [2024-07-15 14:36:47.521635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.176 [2024-07-15 14:36:47.521759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.521781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.176 [2024-07-15 14:36:47.521792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.521809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.521824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.521832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.521841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.176 [2024-07-15 14:36:47.521856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.176 [2024-07-15 14:36:47.523752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.176 [2024-07-15 14:36:47.523867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.523888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.176 [2024-07-15 14:36:47.523899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.523915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.523929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.523938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.523947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.176 [2024-07-15 14:36:47.523961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.176 [2024-07-15 14:36:47.531704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.176 [2024-07-15 14:36:47.531827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.531847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.176 [2024-07-15 14:36:47.531858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.531885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.531901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.531910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.531918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.176 [2024-07-15 14:36:47.531933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.176 [2024-07-15 14:36:47.533822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.176 [2024-07-15 14:36:47.533926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.533947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.176 [2024-07-15 14:36:47.533958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.533974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.533988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.533997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.534006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.176 [2024-07-15 14:36:47.534021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.176 [2024-07-15 14:36:47.541794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.176 [2024-07-15 14:36:47.541899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.541920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.176 [2024-07-15 14:36:47.541931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.541948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.541962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.541971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.541980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.176 [2024-07-15 14:36:47.541995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.176 [2024-07-15 14:36:47.543895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.176 [2024-07-15 14:36:47.544014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.544036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.176 [2024-07-15 14:36:47.544046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.544063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.544077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.176 [2024-07-15 14:36:47.544086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.176 [2024-07-15 14:36:47.544095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.176 [2024-07-15 14:36:47.544109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.176 [2024-07-15 14:36:47.551865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.176 [2024-07-15 14:36:47.551951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.176 [2024-07-15 14:36:47.551972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.176 [2024-07-15 14:36:47.551983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.176 [2024-07-15 14:36:47.551999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.176 [2024-07-15 14:36:47.552013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.552022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.552031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.177 [2024-07-15 14:36:47.552045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.177 [2024-07-15 14:36:47.553970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.177 [2024-07-15 14:36:47.554053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.554073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.177 [2024-07-15 14:36:47.554084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.554100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.554114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.554123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.554132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.177 [2024-07-15 14:36:47.554146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.177 [2024-07-15 14:36:47.561922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.177 [2024-07-15 14:36:47.562009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.562030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.177 [2024-07-15 14:36:47.562040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.562056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.562070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.562079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.562088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.177 [2024-07-15 14:36:47.562102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.177 [2024-07-15 14:36:47.564023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.177 [2024-07-15 14:36:47.564152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.564173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.177 [2024-07-15 14:36:47.564184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.564200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.564214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.564223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.564232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.177 [2024-07-15 14:36:47.564246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.177 [2024-07-15 14:36:47.571978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.177 [2024-07-15 14:36:47.572094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.572115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.177 [2024-07-15 14:36:47.572125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.572141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.572155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.572164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.572173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.177 [2024-07-15 14:36:47.572188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.177 [2024-07-15 14:36:47.574106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.177 [2024-07-15 14:36:47.574189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.574209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.177 [2024-07-15 14:36:47.574219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.574235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.574249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.574258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.574267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.177 [2024-07-15 14:36:47.574282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.177 [2024-07-15 14:36:47.582048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.177 [2024-07-15 14:36:47.582134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.582155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.177 [2024-07-15 14:36:47.582166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.582182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.582197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.582205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.582214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.177 [2024-07-15 14:36:47.582228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.177 [2024-07-15 14:36:47.584158] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.177 [2024-07-15 14:36:47.584271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.584292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.177 [2024-07-15 14:36:47.584303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.584319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.584333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.584342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.584351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.177 [2024-07-15 14:36:47.584365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.177 [2024-07-15 14:36:47.592103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.177 [2024-07-15 14:36:47.592221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.592242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.177 [2024-07-15 14:36:47.592252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.592269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.592284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.592292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.592301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.177 [2024-07-15 14:36:47.592315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.177 [2024-07-15 14:36:47.594242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.177 [2024-07-15 14:36:47.594354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.594376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.177 [2024-07-15 14:36:47.594387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.594404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.177 [2024-07-15 14:36:47.594418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.177 [2024-07-15 14:36:47.594427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.177 [2024-07-15 14:36:47.594436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.177 [2024-07-15 14:36:47.594451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.177 [2024-07-15 14:36:47.602175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.177 [2024-07-15 14:36:47.602259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.177 [2024-07-15 14:36:47.602280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.177 [2024-07-15 14:36:47.602290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.177 [2024-07-15 14:36:47.602307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.602332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.602343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.602352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.178 [2024-07-15 14:36:47.602367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.178 [2024-07-15 14:36:47.604312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.178 [2024-07-15 14:36:47.604410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.604431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.178 [2024-07-15 14:36:47.604441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.604457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.604471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.604480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.604489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.178 [2024-07-15 14:36:47.604503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.178 [2024-07-15 14:36:47.612231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.178 [2024-07-15 14:36:47.612317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.612338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.178 [2024-07-15 14:36:47.612348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.612364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.612379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.612387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.612396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.178 [2024-07-15 14:36:47.612411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.178 [2024-07-15 14:36:47.614381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.178 [2024-07-15 14:36:47.614465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.614486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.178 [2024-07-15 14:36:47.614496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.614512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.614527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.614536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.614545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.178 [2024-07-15 14:36:47.614559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.178 [2024-07-15 14:36:47.622287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.178 [2024-07-15 14:36:47.622380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.622402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.178 [2024-07-15 14:36:47.622413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.622429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.622443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.622452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.622461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.178 [2024-07-15 14:36:47.622475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.178 [2024-07-15 14:36:47.624439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.178 [2024-07-15 14:36:47.624524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.624545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.178 [2024-07-15 14:36:47.624556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.624573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.624587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.624595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.624604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.178 [2024-07-15 14:36:47.624619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.178 [2024-07-15 14:36:47.632350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.178 [2024-07-15 14:36:47.632454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.632475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.178 [2024-07-15 14:36:47.632485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.632501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.632515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.632524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.632533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.178 [2024-07-15 14:36:47.632548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.178 [2024-07-15 14:36:47.634494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.178 [2024-07-15 14:36:47.634581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.634602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.178 [2024-07-15 14:36:47.634612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.634628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.634643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.634651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.634660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.178 [2024-07-15 14:36:47.634675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.178 [2024-07-15 14:36:47.642421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.178 [2024-07-15 14:36:47.642507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.642528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.178 [2024-07-15 14:36:47.642539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.642556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.642570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.642578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.642587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.178 [2024-07-15 14:36:47.642602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.178 [2024-07-15 14:36:47.644551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.178 [2024-07-15 14:36:47.644635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.644656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.178 [2024-07-15 14:36:47.644666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.644682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.644708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.644719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.644729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.178 [2024-07-15 14:36:47.644743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.178 [2024-07-15 14:36:47.652478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.178 [2024-07-15 14:36:47.652573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.652594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.178 [2024-07-15 14:36:47.652605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.178 [2024-07-15 14:36:47.652621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.178 [2024-07-15 14:36:47.652635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.178 [2024-07-15 14:36:47.652644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.178 [2024-07-15 14:36:47.652653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.178 [2024-07-15 14:36:47.652667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.178 [2024-07-15 14:36:47.654606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.178 [2024-07-15 14:36:47.654693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.178 [2024-07-15 14:36:47.654731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.179 [2024-07-15 14:36:47.654742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.654759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.654774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.654782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.654791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.179 [2024-07-15 14:36:47.654806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.179 [2024-07-15 14:36:47.662537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.179 [2024-07-15 14:36:47.662629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.662650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.179 [2024-07-15 14:36:47.662661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.662677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.662691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.662716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.662726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.179 [2024-07-15 14:36:47.662742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.179 [2024-07-15 14:36:47.664662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.179 [2024-07-15 14:36:47.664755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.664776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.179 [2024-07-15 14:36:47.664787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.664803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.664817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.664826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.664836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.179 [2024-07-15 14:36:47.664850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.179 [2024-07-15 14:36:47.672595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.179 [2024-07-15 14:36:47.672680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.672712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.179 [2024-07-15 14:36:47.672724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.672741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.672756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.672765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.672774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.179 [2024-07-15 14:36:47.672789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.179 [2024-07-15 14:36:47.674740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.179 [2024-07-15 14:36:47.674839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.674859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.179 [2024-07-15 14:36:47.674870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.674886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.674900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.674910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.674919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.179 [2024-07-15 14:36:47.674933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.179 [2024-07-15 14:36:47.682652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.179 [2024-07-15 14:36:47.682748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.682770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.179 [2024-07-15 14:36:47.682781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.682797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.682812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.682821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.682830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.179 [2024-07-15 14:36:47.682845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.179 [2024-07-15 14:36:47.684808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.179 [2024-07-15 14:36:47.684907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.684927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.179 [2024-07-15 14:36:47.684938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.684954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.684978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.684989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.684998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.179 [2024-07-15 14:36:47.685012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.179 [2024-07-15 14:36:47.692711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.179 [2024-07-15 14:36:47.692834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.692855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.179 [2024-07-15 14:36:47.692865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.692882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.692896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.692905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.692913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.179 [2024-07-15 14:36:47.692928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.179 [2024-07-15 14:36:47.694878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.179 [2024-07-15 14:36:47.694978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.694999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.179 [2024-07-15 14:36:47.695010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.179 [2024-07-15 14:36:47.695026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.179 [2024-07-15 14:36:47.695040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.179 [2024-07-15 14:36:47.695049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.179 [2024-07-15 14:36:47.695058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.179 [2024-07-15 14:36:47.695072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.179 [2024-07-15 14:36:47.695584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.695621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cf7e0 with addr=10.0.0.3, port=8009 00:20:08.179 [2024-07-15 14:36:47.695640] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:08.179 [2024-07-15 14:36:47.695650] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:08.179 [2024-07-15 14:36:47.695660] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.3:8009] could not start discovery connect 00:20:08.179 [2024-07-15 14:36:47.695771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.179 [2024-07-15 14:36:47.695802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16cf9c0 with addr=10.0.0.2, port=8009 00:20:08.179 [2024-07-15 14:36:47.695821] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:08.179 [2024-07-15 14:36:47.695831] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:08.180 [2024-07-15 14:36:47.695839] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8009] could not start discovery connect 00:20:08.180 [2024-07-15 14:36:47.702807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.180 [2024-07-15 14:36:47.702897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.702918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.180 [2024-07-15 14:36:47.702928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.702946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.702960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.702969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.702978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.180 [2024-07-15 14:36:47.702993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.180 [2024-07-15 14:36:47.704948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.180 [2024-07-15 14:36:47.705033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.705054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.180 [2024-07-15 14:36:47.705064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.705090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.705106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.705115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.705124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.180 [2024-07-15 14:36:47.705138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.180 [2024-07-15 14:36:47.712864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.180 [2024-07-15 14:36:47.712950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.712971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.180 [2024-07-15 14:36:47.712982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.712998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.713023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.713033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.713042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.180 [2024-07-15 14:36:47.713057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.180 [2024-07-15 14:36:47.715004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.180 [2024-07-15 14:36:47.715106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.715127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.180 [2024-07-15 14:36:47.715138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.715154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.715169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.715178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.715187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.180 [2024-07-15 14:36:47.715201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.180 [2024-07-15 14:36:47.722920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.180 [2024-07-15 14:36:47.723019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.723039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.180 [2024-07-15 14:36:47.723050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.723066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.723081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.723089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.723098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.180 [2024-07-15 14:36:47.723113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.180 [2024-07-15 14:36:47.725076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.180 [2024-07-15 14:36:47.725161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.725181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.180 [2024-07-15 14:36:47.725192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.725209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.725223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.725232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.725241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.180 [2024-07-15 14:36:47.725256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.180 [2024-07-15 14:36:47.732992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.180 [2024-07-15 14:36:47.733084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.733105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.180 [2024-07-15 14:36:47.733116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.733142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.733158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.733167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.733176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.180 [2024-07-15 14:36:47.733191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.180 [2024-07-15 14:36:47.735131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.180 [2024-07-15 14:36:47.735247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.180 [2024-07-15 14:36:47.735268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.180 [2024-07-15 14:36:47.735278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.180 [2024-07-15 14:36:47.735295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.180 [2024-07-15 14:36:47.735309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.180 [2024-07-15 14:36:47.735318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.180 [2024-07-15 14:36:47.735326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.180 [2024-07-15 14:36:47.735341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.181 [2024-07-15 14:36:47.743051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.181 [2024-07-15 14:36:47.743167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.181 [2024-07-15 14:36:47.743187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.181 [2024-07-15 14:36:47.743199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.181 [2024-07-15 14:36:47.743215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.181 [2024-07-15 14:36:47.743229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.181 [2024-07-15 14:36:47.743238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.181 [2024-07-15 14:36:47.743247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.181 [2024-07-15 14:36:47.743261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.181 [2024-07-15 14:36:47.745199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.181 [2024-07-15 14:36:47.745301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.181 [2024-07-15 14:36:47.745321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.181 [2024-07-15 14:36:47.745332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.181 [2024-07-15 14:36:47.745348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.181 [2024-07-15 14:36:47.745362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.181 [2024-07-15 14:36:47.745371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.181 [2024-07-15 14:36:47.745380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.181 [2024-07-15 14:36:47.745395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.181 [2024-07-15 14:36:47.753138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.181 [2024-07-15 14:36:47.753232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.181 [2024-07-15 14:36:47.753254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.181 [2024-07-15 14:36:47.753264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.181 [2024-07-15 14:36:47.753281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.181 [2024-07-15 14:36:47.753295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.181 [2024-07-15 14:36:47.753303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.181 [2024-07-15 14:36:47.753312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.181 [2024-07-15 14:36:47.753328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.181 [2024-07-15 14:36:47.755270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.181 [2024-07-15 14:36:47.755357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.181 [2024-07-15 14:36:47.755378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.181 [2024-07-15 14:36:47.755389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.181 [2024-07-15 14:36:47.755405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.181 [2024-07-15 14:36:47.755419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.181 [2024-07-15 14:36:47.755428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.181 [2024-07-15 14:36:47.755437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.181 [2024-07-15 14:36:47.755452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.181 [2024-07-15 14:36:47.763197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.181 [2024-07-15 14:36:47.763300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.181 [2024-07-15 14:36:47.763320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.181 [2024-07-15 14:36:47.763331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.181 [2024-07-15 14:36:47.763347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.181 [2024-07-15 14:36:47.763361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.181 [2024-07-15 14:36:47.763370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.181 [2024-07-15 14:36:47.763379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.181 [2024-07-15 14:36:47.763394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.181 [2024-07-15 14:36:47.765325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.181 [2024-07-15 14:36:47.765409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.181 [2024-07-15 14:36:47.765430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.181 [2024-07-15 14:36:47.765440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.181 [2024-07-15 14:36:47.765456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.181 [2024-07-15 14:36:47.765470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.181 [2024-07-15 14:36:47.765479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.181 [2024-07-15 14:36:47.765488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.181 [2024-07-15 14:36:47.765503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.442 [2024-07-15 14:36:47.773271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.442 [2024-07-15 14:36:47.773364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.442 [2024-07-15 14:36:47.773385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.442 [2024-07-15 14:36:47.773396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.442 [2024-07-15 14:36:47.773413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.442 [2024-07-15 14:36:47.773427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.442 [2024-07-15 14:36:47.773436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.442 [2024-07-15 14:36:47.773445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.442 [2024-07-15 14:36:47.773460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.442 [2024-07-15 14:36:47.775381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.442 [2024-07-15 14:36:47.775468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.442 [2024-07-15 14:36:47.775488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.442 [2024-07-15 14:36:47.775500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.442 [2024-07-15 14:36:47.775516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.442 [2024-07-15 14:36:47.775530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.442 [2024-07-15 14:36:47.775539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.442 [2024-07-15 14:36:47.775549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.442 [2024-07-15 14:36:47.775563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.442 [2024-07-15 14:36:47.783330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.442 [2024-07-15 14:36:47.783415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.442 [2024-07-15 14:36:47.783435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.442 [2024-07-15 14:36:47.783446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.442 [2024-07-15 14:36:47.783462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.442 [2024-07-15 14:36:47.783476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.442 [2024-07-15 14:36:47.783485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.442 [2024-07-15 14:36:47.783494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.442 [2024-07-15 14:36:47.783509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.442 [2024-07-15 14:36:47.785437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.442 [2024-07-15 14:36:47.785521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.442 [2024-07-15 14:36:47.785541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.442 [2024-07-15 14:36:47.785551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.442 [2024-07-15 14:36:47.785567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.442 [2024-07-15 14:36:47.785582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.442 [2024-07-15 14:36:47.785591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.442 [2024-07-15 14:36:47.785600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.442 [2024-07-15 14:36:47.785614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.442 [2024-07-15 14:36:47.793386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.442 [2024-07-15 14:36:47.793474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.442 [2024-07-15 14:36:47.793495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.442 [2024-07-15 14:36:47.793506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.793522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.793536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.793545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.793555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.443 [2024-07-15 14:36:47.793569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.443 [2024-07-15 14:36:47.795492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.443 [2024-07-15 14:36:47.795578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.795599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.443 [2024-07-15 14:36:47.795609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.795626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.795640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.795648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.795657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.443 [2024-07-15 14:36:47.795672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.443 [2024-07-15 14:36:47.803443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.443 [2024-07-15 14:36:47.803544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.803565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.443 [2024-07-15 14:36:47.803575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.803591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.803606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.803614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.803623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.443 [2024-07-15 14:36:47.803637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.443 [2024-07-15 14:36:47.805546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.443 [2024-07-15 14:36:47.805645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.805665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.443 [2024-07-15 14:36:47.805675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.805691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.805718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.805729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.805738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.443 [2024-07-15 14:36:47.805753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.443 [2024-07-15 14:36:47.813513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.443 [2024-07-15 14:36:47.813599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.813620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.443 [2024-07-15 14:36:47.813630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.813646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.813660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.813669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.813678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.443 [2024-07-15 14:36:47.813692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.443 [2024-07-15 14:36:47.815616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.443 [2024-07-15 14:36:47.815718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.815740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.443 [2024-07-15 14:36:47.815751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.815768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.815782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.815791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.815800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.443 [2024-07-15 14:36:47.815815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.443 [2024-07-15 14:36:47.823568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.443 [2024-07-15 14:36:47.823651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.823671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.443 [2024-07-15 14:36:47.823681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.823724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.823742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.823751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.823760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.443 [2024-07-15 14:36:47.823774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.443 [2024-07-15 14:36:47.825671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.443 [2024-07-15 14:36:47.825780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.825800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.443 [2024-07-15 14:36:47.825810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.825827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.825841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.825850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.825859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.443 [2024-07-15 14:36:47.825873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.443 [2024-07-15 14:36:47.833626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.443 [2024-07-15 14:36:47.833739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.833762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.443 [2024-07-15 14:36:47.833790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.833811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.833826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.833834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.833844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.443 [2024-07-15 14:36:47.833859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.443 [2024-07-15 14:36:47.835755] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.443 [2024-07-15 14:36:47.835855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.835877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.443 [2024-07-15 14:36:47.835888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.835904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.835919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.835928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.835937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.443 [2024-07-15 14:36:47.835951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.443 [2024-07-15 14:36:47.843688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.443 [2024-07-15 14:36:47.843817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.843839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.443 [2024-07-15 14:36:47.843849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.443 [2024-07-15 14:36:47.843865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.443 [2024-07-15 14:36:47.843879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.443 [2024-07-15 14:36:47.843888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.443 [2024-07-15 14:36:47.843898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.443 [2024-07-15 14:36:47.843912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.443 [2024-07-15 14:36:47.845822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.443 [2024-07-15 14:36:47.845908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.443 [2024-07-15 14:36:47.845928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.444 [2024-07-15 14:36:47.845938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.845955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.845969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.845978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.845987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.444 [2024-07-15 14:36:47.846001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.444 [2024-07-15 14:36:47.853786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.444 [2024-07-15 14:36:47.853873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.853893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.444 [2024-07-15 14:36:47.853903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.853919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.853934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.853943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.853951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.444 [2024-07-15 14:36:47.853967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.444 [2024-07-15 14:36:47.855876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.444 [2024-07-15 14:36:47.855987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.856009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.444 [2024-07-15 14:36:47.856020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.856036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.856050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.856059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.856068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.444 [2024-07-15 14:36:47.856097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.444 [2024-07-15 14:36:47.863842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.444 [2024-07-15 14:36:47.863929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.863950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.444 [2024-07-15 14:36:47.863961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.863977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.863991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.864000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.864009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.444 [2024-07-15 14:36:47.864024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.444 [2024-07-15 14:36:47.865950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.444 [2024-07-15 14:36:47.866035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.866056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.444 [2024-07-15 14:36:47.866066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.866082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.866110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.866119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.866128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.444 [2024-07-15 14:36:47.866141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.444 [2024-07-15 14:36:47.873900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.444 [2024-07-15 14:36:47.873986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.874007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.444 [2024-07-15 14:36:47.874017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.874033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.874048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.874056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.874065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.444 [2024-07-15 14:36:47.874080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.444 [2024-07-15 14:36:47.876005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.444 [2024-07-15 14:36:47.876107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.876127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.444 [2024-07-15 14:36:47.876137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.876153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.876167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.876175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.876201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.444 [2024-07-15 14:36:47.876222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.444 [2024-07-15 14:36:47.883956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.444 [2024-07-15 14:36:47.884056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.884076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.444 [2024-07-15 14:36:47.884087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.884103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.884117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.884126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.884134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.444 [2024-07-15 14:36:47.884153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.444 [2024-07-15 14:36:47.886062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.444 [2024-07-15 14:36:47.886145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.886165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.444 [2024-07-15 14:36:47.886175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.886191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.886206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.886219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.886234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.444 [2024-07-15 14:36:47.886254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.444 [2024-07-15 14:36:47.894025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.444 [2024-07-15 14:36:47.894127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.444 [2024-07-15 14:36:47.894148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.444 [2024-07-15 14:36:47.894158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.444 [2024-07-15 14:36:47.894174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.444 [2024-07-15 14:36:47.894192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.444 [2024-07-15 14:36:47.894207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.444 [2024-07-15 14:36:47.894222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.444 [2024-07-15 14:36:47.894240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.444 [2024-07-15 14:36:47.896115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.444 [2024-07-15 14:36:47.896216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.896236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.445 [2024-07-15 14:36:47.896247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.896263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.896283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.896299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.896312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.445 [2024-07-15 14:36:47.896328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.445 [2024-07-15 14:36:47.904095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.445 [2024-07-15 14:36:47.904212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.904232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.445 [2024-07-15 14:36:47.904243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.904258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.904273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.904282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.904290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.445 [2024-07-15 14:36:47.904305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.445 [2024-07-15 14:36:47.906184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.445 [2024-07-15 14:36:47.906297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.906317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.445 [2024-07-15 14:36:47.906341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.906358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.906372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.906381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.906390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.445 [2024-07-15 14:36:47.906404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.445 [2024-07-15 14:36:47.914178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.445 [2024-07-15 14:36:47.914292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.914313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.445 [2024-07-15 14:36:47.914335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.914355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.914369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.914378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.914387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.445 [2024-07-15 14:36:47.914403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.445 [2024-07-15 14:36:47.916253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.445 [2024-07-15 14:36:47.916337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.916358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.445 [2024-07-15 14:36:47.916369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.916387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.916401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.916409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.916418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.445 [2024-07-15 14:36:47.916433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.445 [2024-07-15 14:36:47.924247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.445 [2024-07-15 14:36:47.924348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.924368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.445 [2024-07-15 14:36:47.924379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.924396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.924410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.924419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.924428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.445 [2024-07-15 14:36:47.924442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.445 [2024-07-15 14:36:47.926309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.445 [2024-07-15 14:36:47.926421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.926441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.445 [2024-07-15 14:36:47.926452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.926468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.926482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.926491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.926500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.445 [2024-07-15 14:36:47.926514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.445 [2024-07-15 14:36:47.934318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.445 [2024-07-15 14:36:47.934434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.445 [2024-07-15 14:36:47.934455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.445 [2024-07-15 14:36:47.934466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.445 [2024-07-15 14:36:47.934482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.445 [2024-07-15 14:36:47.934496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.445 [2024-07-15 14:36:47.934505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.445 [2024-07-15 14:36:47.934514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.445 [2024-07-15 14:36:47.934529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same nine-message sequence (resetting controller -> connect() failed, errno = 111 -> sock connection error -> recv state of tqpair is same with the state(5) to be set -> Failed to flush tqpair (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> in failed state -> Resetting controller failed) repeats, alternating between nqn.2016-06.io.spdk:cnode20 (tqpair=0x169e360, addr=10.0.0.3, port=4420) and nqn.2016-06.io.spdk:cnode0 (tqpair=0x16e53a0, addr=10.0.0.2, port=4420), roughly every 10 ms from 14:36:47.936 through 14:36:48.176 (elapsed 00:20:08.445 - 00:20:08.712) ...]
00:20:08.712 [2024-07-15 14:36:48.177780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.177872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.177893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.177904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.177920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.177949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.177966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.177980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.712 [2024-07-15 14:36:48.177996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.712 [2024-07-15 14:36:48.185914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.712 [2024-07-15 14:36:48.186012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.186032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.712 [2024-07-15 14:36:48.186042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.186067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.186099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.186108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.186122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.712 [2024-07-15 14:36:48.186144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.712 [2024-07-15 14:36:48.187838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.187935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.187956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.187966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.187983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.187997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.188005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.188016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.712 [2024-07-15 14:36:48.188036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.712 [2024-07-15 14:36:48.195968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.712 [2024-07-15 14:36:48.196066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.196086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.712 [2024-07-15 14:36:48.196096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.196112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.196126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.196135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.196159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.712 [2024-07-15 14:36:48.196179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.712 [2024-07-15 14:36:48.197888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.197984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.198004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.198014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.198029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.198052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.198080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.198094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.712 [2024-07-15 14:36:48.198115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.712 [2024-07-15 14:36:48.206020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.712 [2024-07-15 14:36:48.206118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.206138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.712 [2024-07-15 14:36:48.206148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.206174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.206189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.206198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.206226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.712 [2024-07-15 14:36:48.206248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.712 [2024-07-15 14:36:48.207940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.208038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.208059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.208069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.208085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.208100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.208108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.208118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.712 [2024-07-15 14:36:48.208140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.712 [2024-07-15 14:36:48.216074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.712 [2024-07-15 14:36:48.216172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.216192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.712 [2024-07-15 14:36:48.216202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.216218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.216232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.216256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.216269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.712 [2024-07-15 14:36:48.216292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.712 [2024-07-15 14:36:48.217992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.218104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.218124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.218134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.218150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.218190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.218205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.218219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.712 [2024-07-15 14:36:48.218237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.712 [2024-07-15 14:36:48.226128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.712 [2024-07-15 14:36:48.226240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.226260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.712 [2024-07-15 14:36:48.226270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.226294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.226309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.226318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.226360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.712 [2024-07-15 14:36:48.226381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.712 [2024-07-15 14:36:48.228045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.228157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.228177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.228187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.228203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.228216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.228242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.228255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.712 [2024-07-15 14:36:48.228277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.712 [2024-07-15 14:36:48.236180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.712 [2024-07-15 14:36:48.236282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.236303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.712 [2024-07-15 14:36:48.236313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.712 [2024-07-15 14:36:48.236329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.712 [2024-07-15 14:36:48.236343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.712 [2024-07-15 14:36:48.236352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.712 [2024-07-15 14:36:48.236366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.712 [2024-07-15 14:36:48.236388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.712 [2024-07-15 14:36:48.238098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.712 [2024-07-15 14:36:48.238211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.712 [2024-07-15 14:36:48.238231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.712 [2024-07-15 14:36:48.238242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.238258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.238293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.238306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.238315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.238341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.713 [2024-07-15 14:36:48.246235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.713 [2024-07-15 14:36:48.246345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.246366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.713 [2024-07-15 14:36:48.246377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.246403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.246423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.246438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.246452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.713 [2024-07-15 14:36:48.246468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.713 [2024-07-15 14:36:48.248164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.713 [2024-07-15 14:36:48.248280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.248300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.713 [2024-07-15 14:36:48.248311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.248327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.248347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.248361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.248373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.248388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.713 [2024-07-15 14:36:48.256291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.713 [2024-07-15 14:36:48.256379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.256399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.713 [2024-07-15 14:36:48.256409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.256426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.256441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.256454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.256468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.713 [2024-07-15 14:36:48.256488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.713 [2024-07-15 14:36:48.258233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.713 [2024-07-15 14:36:48.258317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.258351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.713 [2024-07-15 14:36:48.258362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.258379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.258413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.258431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.258442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.258461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.713 [2024-07-15 14:36:48.266362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.713 [2024-07-15 14:36:48.266448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.266469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.713 [2024-07-15 14:36:48.266479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.266506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.266530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.266543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.266553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.713 [2024-07-15 14:36:48.266568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.713 [2024-07-15 14:36:48.268288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.713 [2024-07-15 14:36:48.268371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.268391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.713 [2024-07-15 14:36:48.268402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.268418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.268436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.268450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.268462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.268478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.713 [2024-07-15 14:36:48.276419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.713 [2024-07-15 14:36:48.276514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.276545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.713 [2024-07-15 14:36:48.276555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.276572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.276588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.276603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.276618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.713 [2024-07-15 14:36:48.276634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.713 [2024-07-15 14:36:48.278346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.713 [2024-07-15 14:36:48.278432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.278453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.713 [2024-07-15 14:36:48.278463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.278479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.278507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.278524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.278536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.278552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.713 [2024-07-15 14:36:48.286480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.713 [2024-07-15 14:36:48.286584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.286605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.713 [2024-07-15 14:36:48.286615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.286631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.286649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.286664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.286677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.713 [2024-07-15 14:36:48.286693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.713 [2024-07-15 14:36:48.288400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.713 [2024-07-15 14:36:48.288483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.288504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.713 [2024-07-15 14:36:48.288514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.288530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.288544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.288557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.288571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.288590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.713 [2024-07-15 14:36:48.296537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.713 [2024-07-15 14:36:48.296640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.296660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.713 [2024-07-15 14:36:48.296671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.296686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.296716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.296747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.296759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.713 [2024-07-15 14:36:48.296776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.713 [2024-07-15 14:36:48.298454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.713 [2024-07-15 14:36:48.298539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.713 [2024-07-15 14:36:48.298560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.713 [2024-07-15 14:36:48.298570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.713 [2024-07-15 14:36:48.298595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.713 [2024-07-15 14:36:48.298616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.713 [2024-07-15 14:36:48.298631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.713 [2024-07-15 14:36:48.298642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.713 [2024-07-15 14:36:48.298658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.974 [2024-07-15 14:36:48.306593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.975 [2024-07-15 14:36:48.306680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.306711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.975 [2024-07-15 14:36:48.306724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.306740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.306761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.306776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.306786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.975 [2024-07-15 14:36:48.306801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.975 [2024-07-15 14:36:48.308510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.975 [2024-07-15 14:36:48.308596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.308616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.975 [2024-07-15 14:36:48.308627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.308643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.308658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.308671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.308685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.975 [2024-07-15 14:36:48.308718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.975 [2024-07-15 14:36:48.316650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.975 [2024-07-15 14:36:48.316775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.316796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.975 [2024-07-15 14:36:48.316807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.316823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.316841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.316856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.316868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.975 [2024-07-15 14:36:48.316884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.975 [2024-07-15 14:36:48.318565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.975 [2024-07-15 14:36:48.318652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.318672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.975 [2024-07-15 14:36:48.318683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.318718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.318738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.318747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.318765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.975 [2024-07-15 14:36:48.318780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.975 [2024-07-15 14:36:48.326747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.975 [2024-07-15 14:36:48.326833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.326854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.975 [2024-07-15 14:36:48.326864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.326880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.326899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.326914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.326926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.975 [2024-07-15 14:36:48.326941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.975 [2024-07-15 14:36:48.328623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.975 [2024-07-15 14:36:48.328718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.328740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.975 [2024-07-15 14:36:48.328751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.328767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.328786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.328800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.328812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.975 [2024-07-15 14:36:48.328827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.975 [2024-07-15 14:36:48.336804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.975 [2024-07-15 14:36:48.336895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.336916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.975 [2024-07-15 14:36:48.336926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.336943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.336957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.336966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.336975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.975 [2024-07-15 14:36:48.336990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.975 [2024-07-15 14:36:48.338678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.975 [2024-07-15 14:36:48.338783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.338804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.975 [2024-07-15 14:36:48.338814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.338831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.338845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.338854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.338863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.975 [2024-07-15 14:36:48.338881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.975 [2024-07-15 14:36:48.346863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.975 [2024-07-15 14:36:48.346952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.346973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.975 [2024-07-15 14:36:48.346983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.347000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.347014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.347024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.347037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.975 [2024-07-15 14:36:48.347059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.975 [2024-07-15 14:36:48.348743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.975 [2024-07-15 14:36:48.348828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.348848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.975 [2024-07-15 14:36:48.348859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.348875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.348890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.348900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.348914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.975 [2024-07-15 14:36:48.348934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.975 [2024-07-15 14:36:48.356919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.975 [2024-07-15 14:36:48.357004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.975 [2024-07-15 14:36:48.357025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.975 [2024-07-15 14:36:48.357036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.975 [2024-07-15 14:36:48.357052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.975 [2024-07-15 14:36:48.357067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.975 [2024-07-15 14:36:48.357081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.975 [2024-07-15 14:36:48.357096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.976 [2024-07-15 14:36:48.357114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.976 [2024-07-15 14:36:48.358796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.976 [2024-07-15 14:36:48.358882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.358902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.976 [2024-07-15 14:36:48.358912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.358929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.358943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.358951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.358960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.976 [2024-07-15 14:36:48.358975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.976 [2024-07-15 14:36:48.366975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.976 [2024-07-15 14:36:48.367080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.367101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.976 [2024-07-15 14:36:48.367111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.367127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.367142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.367151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.367160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.976 [2024-07-15 14:36:48.367177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.976 [2024-07-15 14:36:48.368851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.976 [2024-07-15 14:36:48.368932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.368952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.976 [2024-07-15 14:36:48.368963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.368979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.368993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.369002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.369011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.976 [2024-07-15 14:36:48.369025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.976 [2024-07-15 14:36:48.377054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.976 [2024-07-15 14:36:48.377158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.377181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.976 [2024-07-15 14:36:48.377192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.377208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.377223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.377236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.377249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.976 [2024-07-15 14:36:48.377265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.976 [2024-07-15 14:36:48.378903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.976 [2024-07-15 14:36:48.378988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.379008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.976 [2024-07-15 14:36:48.379019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.379034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.379049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.379058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.379067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.976 [2024-07-15 14:36:48.379081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.976 [2024-07-15 14:36:48.387119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.976 [2024-07-15 14:36:48.387204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.387225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.976 [2024-07-15 14:36:48.387240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.387257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.387271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.387280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.387289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.976 [2024-07-15 14:36:48.387304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.976 [2024-07-15 14:36:48.388957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.976 [2024-07-15 14:36:48.389041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.389062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.976 [2024-07-15 14:36:48.389073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.389089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.389104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.389112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.389121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.976 [2024-07-15 14:36:48.389135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.976 [2024-07-15 14:36:48.397174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.976 [2024-07-15 14:36:48.397275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.397295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.976 [2024-07-15 14:36:48.397306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.397323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.397337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.397346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.397355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.976 [2024-07-15 14:36:48.397369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.976 [2024-07-15 14:36:48.399010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.976 [2024-07-15 14:36:48.399109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.399128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.976 [2024-07-15 14:36:48.399139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.399155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.399172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.399187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.399200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.976 [2024-07-15 14:36:48.399215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.976 [2024-07-15 14:36:48.407228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.976 [2024-07-15 14:36:48.407313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.407333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.976 [2024-07-15 14:36:48.407344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.407360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.407374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.407387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.976 [2024-07-15 14:36:48.407402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.976 [2024-07-15 14:36:48.407421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.976 [2024-07-15 14:36:48.409064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.976 [2024-07-15 14:36:48.409148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.976 [2024-07-15 14:36:48.409168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.976 [2024-07-15 14:36:48.409178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.976 [2024-07-15 14:36:48.409194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.976 [2024-07-15 14:36:48.409209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.976 [2024-07-15 14:36:48.409223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.409238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.977 [2024-07-15 14:36:48.409256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.977 [2024-07-15 14:36:48.417285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.977 [2024-07-15 14:36:48.417376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.417397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.977 [2024-07-15 14:36:48.417407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.417423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.417438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.417446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.417455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.977 [2024-07-15 14:36:48.417470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.977 [2024-07-15 14:36:48.419119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.977 [2024-07-15 14:36:48.419205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.419225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.977 [2024-07-15 14:36:48.419235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.419252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.419266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.419277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.419291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.977 [2024-07-15 14:36:48.419321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.977 [2024-07-15 14:36:48.427346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.977 [2024-07-15 14:36:48.427432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.427452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.977 [2024-07-15 14:36:48.427463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.427479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.427494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.427508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.427523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.977 [2024-07-15 14:36:48.427544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.977 [2024-07-15 14:36:48.429176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.977 [2024-07-15 14:36:48.429262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.429282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.977 [2024-07-15 14:36:48.429293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.429308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.429326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.429341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.429354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.977 [2024-07-15 14:36:48.429370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.977 [2024-07-15 14:36:48.437403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.977 [2024-07-15 14:36:48.437491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.437512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.977 [2024-07-15 14:36:48.437522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.437540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.437555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.437563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.437572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.977 [2024-07-15 14:36:48.437587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.977 [2024-07-15 14:36:48.439232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.977 [2024-07-15 14:36:48.439319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.439339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.977 [2024-07-15 14:36:48.439350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.439366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.439381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.439389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.439398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.977 [2024-07-15 14:36:48.439413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.977 [2024-07-15 14:36:48.447462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.977 [2024-07-15 14:36:48.447550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.447570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.977 [2024-07-15 14:36:48.447581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.447597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.447611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.447620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.447630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.977 [2024-07-15 14:36:48.447644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.977 [2024-07-15 14:36:48.449288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.977 [2024-07-15 14:36:48.449371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.449392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.977 [2024-07-15 14:36:48.449403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.449420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.449434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.449443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.449452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.977 [2024-07-15 14:36:48.449466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.977 [2024-07-15 14:36:48.457519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.977 [2024-07-15 14:36:48.457607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.457628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.977 [2024-07-15 14:36:48.457638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.457655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.457669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.457678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.457687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.977 [2024-07-15 14:36:48.457714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.977 [2024-07-15 14:36:48.459358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.977 [2024-07-15 14:36:48.459443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.459463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.977 [2024-07-15 14:36:48.459474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.459490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.459504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.977 [2024-07-15 14:36:48.459513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.977 [2024-07-15 14:36:48.459521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.977 [2024-07-15 14:36:48.459536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.977 [2024-07-15 14:36:48.467577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.977 [2024-07-15 14:36:48.467662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.977 [2024-07-15 14:36:48.467681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.977 [2024-07-15 14:36:48.467692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.977 [2024-07-15 14:36:48.467722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.977 [2024-07-15 14:36:48.467744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.467760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.467769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.978 [2024-07-15 14:36:48.467785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.978 [2024-07-15 14:36:48.469414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.978 [2024-07-15 14:36:48.469496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.469516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.978 [2024-07-15 14:36:48.469526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.469542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.469560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.469571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.469585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.978 [2024-07-15 14:36:48.469605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.978 [2024-07-15 14:36:48.477631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.978 [2024-07-15 14:36:48.477727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.477748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.978 [2024-07-15 14:36:48.477759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.477776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.477790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.477798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.477807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.978 [2024-07-15 14:36:48.477822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.978 [2024-07-15 14:36:48.479467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.978 [2024-07-15 14:36:48.479559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.479579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.978 [2024-07-15 14:36:48.479590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.479606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.479621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.479629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.479638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.978 [2024-07-15 14:36:48.479652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.978 [2024-07-15 14:36:48.487688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.978 [2024-07-15 14:36:48.487894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.487922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.978 [2024-07-15 14:36:48.487935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.487961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.487983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.487993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.488002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.978 [2024-07-15 14:36:48.488019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.978 [2024-07-15 14:36:48.489529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.978 [2024-07-15 14:36:48.489619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.489640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.978 [2024-07-15 14:36:48.489650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.489667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.489681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.489690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.489712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.978 [2024-07-15 14:36:48.489728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.978 [2024-07-15 14:36:48.497840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.978 [2024-07-15 14:36:48.497935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.497956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.978 [2024-07-15 14:36:48.497967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.497984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.497998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.498007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.498017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.978 [2024-07-15 14:36:48.498031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.978 [2024-07-15 14:36:48.499589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.978 [2024-07-15 14:36:48.499676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.499709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.978 [2024-07-15 14:36:48.499722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.499739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.499753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.499762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.499771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.978 [2024-07-15 14:36:48.499786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.978 [2024-07-15 14:36:48.507900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.978 [2024-07-15 14:36:48.507987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.508007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.978 [2024-07-15 14:36:48.508018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.508034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.508048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.508058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.508066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.978 [2024-07-15 14:36:48.508081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.978 [2024-07-15 14:36:48.509646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.978 [2024-07-15 14:36:48.509741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.509762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.978 [2024-07-15 14:36:48.509772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.509789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.509803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.509812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.509821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.978 [2024-07-15 14:36:48.509836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.978 [2024-07-15 14:36:48.517957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.978 [2024-07-15 14:36:48.518043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.518063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.978 [2024-07-15 14:36:48.518074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.518091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.518105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.978 [2024-07-15 14:36:48.518114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.978 [2024-07-15 14:36:48.518123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.978 [2024-07-15 14:36:48.518137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.978 [2024-07-15 14:36:48.519700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.978 [2024-07-15 14:36:48.519825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.978 [2024-07-15 14:36:48.519848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.978 [2024-07-15 14:36:48.519859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.978 [2024-07-15 14:36:48.519875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.978 [2024-07-15 14:36:48.519889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.519898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.519907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.979 [2024-07-15 14:36:48.519921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.979 [2024-07-15 14:36:48.528013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.979 [2024-07-15 14:36:48.528113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.528134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.979 [2024-07-15 14:36:48.528145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.528161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.528176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.528184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.528193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.979 [2024-07-15 14:36:48.528208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.979 [2024-07-15 14:36:48.529798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.979 [2024-07-15 14:36:48.529882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.529903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.979 [2024-07-15 14:36:48.529913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.529931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.529945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.529954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.529963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.979 [2024-07-15 14:36:48.529978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.979 [2024-07-15 14:36:48.538086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.979 [2024-07-15 14:36:48.538173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.538193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.979 [2024-07-15 14:36:48.538204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.538221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.538235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.538244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.538253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.979 [2024-07-15 14:36:48.538268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.979 [2024-07-15 14:36:48.539852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.979 [2024-07-15 14:36:48.539939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.539960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.979 [2024-07-15 14:36:48.539971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.539987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.540002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.540010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.540019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.979 [2024-07-15 14:36:48.540033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.979 [2024-07-15 14:36:48.548141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.979 [2024-07-15 14:36:48.548242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.548262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.979 [2024-07-15 14:36:48.548273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.548289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.548303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.548312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.548321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.979 [2024-07-15 14:36:48.548336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.979 [2024-07-15 14:36:48.549907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.979 [2024-07-15 14:36:48.549990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.550010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.979 [2024-07-15 14:36:48.550021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.550038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.550052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.550061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.550070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.979 [2024-07-15 14:36:48.550084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.979 [2024-07-15 14:36:48.558211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:08.979 [2024-07-15 14:36:48.558298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.558318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:08.979 [2024-07-15 14:36:48.558342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.558359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.558385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.558394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.558403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:08.979 [2024-07-15 14:36:48.558418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.979 [2024-07-15 14:36:48.559961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:08.979 [2024-07-15 14:36:48.560051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.979 [2024-07-15 14:36:48.560071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:08.979 [2024-07-15 14:36:48.560082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:08.979 [2024-07-15 14:36:48.560099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:08.979 [2024-07-15 14:36:48.560113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:08.979 [2024-07-15 14:36:48.560122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:08.979 [2024-07-15 14:36:48.560131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:08.979 [2024-07-15 14:36:48.560145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.240 [2024-07-15 14:36:48.568268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.240 [2024-07-15 14:36:48.568354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.568374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.241 [2024-07-15 14:36:48.568386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.568402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.568416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.568425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.568434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.241 [2024-07-15 14:36:48.568448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.241 [2024-07-15 14:36:48.570019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.241 [2024-07-15 14:36:48.570102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.570122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.241 [2024-07-15 14:36:48.570133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.570149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.570163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.570172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.570181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.241 [2024-07-15 14:36:48.570195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.241 [2024-07-15 14:36:48.578329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.241 [2024-07-15 14:36:48.578415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.578436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.241 [2024-07-15 14:36:48.578447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.578464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.578479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.578487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.578497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.241 [2024-07-15 14:36:48.578511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.241 [2024-07-15 14:36:48.580086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.241 [2024-07-15 14:36:48.580184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.580204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.241 [2024-07-15 14:36:48.580215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.580231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.580245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.580254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.580262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.241 [2024-07-15 14:36:48.580276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.241 [2024-07-15 14:36:48.588384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.241 [2024-07-15 14:36:48.588507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.588527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.241 [2024-07-15 14:36:48.588538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.588555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.588569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.588578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.588587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.241 [2024-07-15 14:36:48.588602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.241 [2024-07-15 14:36:48.590153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.241 [2024-07-15 14:36:48.590237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.590257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.241 [2024-07-15 14:36:48.590267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.590283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.590297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.590306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.590315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.241 [2024-07-15 14:36:48.590342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.241 [2024-07-15 14:36:48.598458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.241 [2024-07-15 14:36:48.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.598573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.241 [2024-07-15 14:36:48.598584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.598600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.598625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.598636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.598645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.241 [2024-07-15 14:36:48.598660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.241 [2024-07-15 14:36:48.600207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.241 [2024-07-15 14:36:48.600314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.600336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.241 [2024-07-15 14:36:48.600347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.600363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.600377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.600386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.600395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.241 [2024-07-15 14:36:48.600409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.241 [2024-07-15 14:36:48.608516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.241 [2024-07-15 14:36:48.608612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.608633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.241 [2024-07-15 14:36:48.608643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.608660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.608674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.608683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.608693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.241 [2024-07-15 14:36:48.608723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.241 [2024-07-15 14:36:48.610280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.241 [2024-07-15 14:36:48.610379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.610401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.241 [2024-07-15 14:36:48.610411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.610428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.610442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.610451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.610460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.241 [2024-07-15 14:36:48.610475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.241 [2024-07-15 14:36:48.618582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.241 [2024-07-15 14:36:48.618679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.241 [2024-07-15 14:36:48.618712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.241 [2024-07-15 14:36:48.618726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.241 [2024-07-15 14:36:48.618753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.241 [2024-07-15 14:36:48.618769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.241 [2024-07-15 14:36:48.618778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.241 [2024-07-15 14:36:48.618787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.241 [2024-07-15 14:36:48.618802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.241 [2024-07-15 14:36:48.620335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.242 [2024-07-15 14:36:48.620419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.620439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.242 [2024-07-15 14:36:48.620450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.620466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.620481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.620489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.620498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.242 [2024-07-15 14:36:48.620512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.242 [2024-07-15 14:36:48.628640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.242 [2024-07-15 14:36:48.628734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.628756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.242 [2024-07-15 14:36:48.628768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.628785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.628799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.628808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.628817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.242 [2024-07-15 14:36:48.628831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.242 [2024-07-15 14:36:48.630392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.242 [2024-07-15 14:36:48.630477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.630498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.242 [2024-07-15 14:36:48.630509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.630525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.630539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.630548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.630557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.242 [2024-07-15 14:36:48.630571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.242 [2024-07-15 14:36:48.638702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.242 [2024-07-15 14:36:48.638808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.638828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.242 [2024-07-15 14:36:48.638839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.638855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.638869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.638878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.638887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.242 [2024-07-15 14:36:48.638902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.242 [2024-07-15 14:36:48.640445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.242 [2024-07-15 14:36:48.640558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.640578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.242 [2024-07-15 14:36:48.640589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.640605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.640620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.640628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.640637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.242 [2024-07-15 14:36:48.640651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.242 [2024-07-15 14:36:48.648778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.242 [2024-07-15 14:36:48.648862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.648883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.242 [2024-07-15 14:36:48.648894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.648911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.648925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.648934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.648943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.242 [2024-07-15 14:36:48.648958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.242 [2024-07-15 14:36:48.650514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.242 [2024-07-15 14:36:48.650599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.650619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.242 [2024-07-15 14:36:48.650630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.650646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.650670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.650681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.650690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.242 [2024-07-15 14:36:48.650719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.242 [2024-07-15 14:36:48.658832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.242 [2024-07-15 14:36:48.658932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.658953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.242 [2024-07-15 14:36:48.658963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.658980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.658994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.659003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.659012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.242 [2024-07-15 14:36:48.659026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.242 [2024-07-15 14:36:48.660569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.242 [2024-07-15 14:36:48.660654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.660674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.242 [2024-07-15 14:36:48.660685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.660713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.660730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.660739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.660747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.242 [2024-07-15 14:36:48.660762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.242 [2024-07-15 14:36:48.668902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.242 [2024-07-15 14:36:48.669006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.242 [2024-07-15 14:36:48.669026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.242 [2024-07-15 14:36:48.669037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.242 [2024-07-15 14:36:48.669053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.242 [2024-07-15 14:36:48.669068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.242 [2024-07-15 14:36:48.669076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.242 [2024-07-15 14:36:48.669085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.242 [2024-07-15 14:36:48.669100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.243 [2024-07-15 14:36:48.670624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.243 [2024-07-15 14:36:48.670720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.670742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.243 [2024-07-15 14:36:48.670753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.243 [2024-07-15 14:36:48.670779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.243 [2024-07-15 14:36:48.670795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.243 [2024-07-15 14:36:48.670804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.243 [2024-07-15 14:36:48.670813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.243 [2024-07-15 14:36:48.670827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.243 [2024-07-15 14:36:48.678974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.243 [2024-07-15 14:36:48.679090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.679110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.243 [2024-07-15 14:36:48.679121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.243 [2024-07-15 14:36:48.679138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.243 [2024-07-15 14:36:48.679152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.243 [2024-07-15 14:36:48.679161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.243 [2024-07-15 14:36:48.679169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.243 [2024-07-15 14:36:48.679184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.243 [2024-07-15 14:36:48.680678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.243 [2024-07-15 14:36:48.680775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.680795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.243 [2024-07-15 14:36:48.680806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.243 [2024-07-15 14:36:48.680822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.243 [2024-07-15 14:36:48.680837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.243 [2024-07-15 14:36:48.680846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.243 [2024-07-15 14:36:48.680854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.243 [2024-07-15 14:36:48.680869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.243 [2024-07-15 14:36:48.689047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.243 [2024-07-15 14:36:48.689176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.689199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.243 [2024-07-15 14:36:48.689209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.243 [2024-07-15 14:36:48.689226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.243 [2024-07-15 14:36:48.689241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.243 [2024-07-15 14:36:48.689249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.243 [2024-07-15 14:36:48.689258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.243 [2024-07-15 14:36:48.689273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.243 [2024-07-15 14:36:48.690773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.243 [2024-07-15 14:36:48.690875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.690895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.243 [2024-07-15 14:36:48.690906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.243 [2024-07-15 14:36:48.690922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.243 [2024-07-15 14:36:48.690936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.243 [2024-07-15 14:36:48.690945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.243 [2024-07-15 14:36:48.690954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.243 [2024-07-15 14:36:48.690969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.243 [2024-07-15 14:36:48.695556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.695608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d1410 with addr=10.0.0.3, port=8009 00:20:09.243 [2024-07-15 14:36:48.695642] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:09.243 [2024-07-15 14:36:48.695652] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:09.243 [2024-07-15 14:36:48.695661] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.3:8009] could not start discovery connect 00:20:09.243 [2024-07-15 14:36:48.695748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.695768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d1410 with addr=10.0.0.2, port=8009 00:20:09.243 [2024-07-15 14:36:48.695782] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:09.243 [2024-07-15 14:36:48.695791] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:09.243 [2024-07-15 14:36:48.695799] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8009] could not start discovery connect 00:20:09.243 [2024-07-15 14:36:48.699141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.243 [2024-07-15 14:36:48.699259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.243 [2024-07-15 14:36:48.699280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.243 [2024-07-15 14:36:48.699290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.243 [2024-07-15 14:36:48.699307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.243 [2024-07-15 14:36:48.699322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 
00:20:09.243 [2024-07-15 14:36:48.699330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.243 [2024-07-15 14:36:48.699339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.243 [2024-07-15 14:36:48.699354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.243 [2024-07-15 14:36:48.700844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.243 [2024-07-15 14:36:48.700945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.700966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.244 [2024-07-15 14:36:48.700976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.700993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.244 [2024-07-15 14:36:48.701007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.244 [2024-07-15 14:36:48.701016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.244 [2024-07-15 14:36:48.701025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.244 [2024-07-15 14:36:48.701039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.244 [2024-07-15 14:36:48.709212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.244 [2024-07-15 14:36:48.709327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.709348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.244 [2024-07-15 14:36:48.709358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.709374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.244 [2024-07-15 14:36:48.709389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.244 [2024-07-15 14:36:48.709397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.244 [2024-07-15 14:36:48.709406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.244 [2024-07-15 14:36:48.709421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
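The two discovery_poller entries just above show the same refusal on port 8009, the default NVMe-oF discovery service port: posix_sock_create() gets connect() errno 111 (ECONNREFUSED on Linux, i.e. the address answers but nothing is listening), so nvme_tcp_ctrlr_construct() cannot build an admin qpair and the controller scan fails. As a minimal standalone sketch of that failure mode (illustrative only, not SPDK code; the address, port, and retry count are placeholder assumptions):

/* Minimal standalone sketch (not SPDK code): connect() to a host that is
 * up but has no listener on the target port fails with ECONNREFUSED,
 * which is errno 111 on Linux -- the same "connect() failed, errno = 111"
 * reported by posix_sock_create() above.  Address, port, and retry count
 * are placeholder assumptions for illustration. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    const char *addr = "10.0.0.2";  /* placeholder target address      */
    const int port = 8009;          /* NVMe-oF discovery service port  */

    for (int attempt = 1; attempt <= 3; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port = htons(port) };
        inet_pton(AF_INET, addr, &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With nothing listening on addr:port this prints errno 111. */
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    attempt, errno, strerror(errno));
            close(fd);
            usleep(10 * 1000);      /* ~10 ms between retries, as in the log */
            continue;
        }

        printf("attempt %d: connected\n", attempt);
        close(fd);
        return 0;
    }
    return 1;
}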
00:20:09.244 [2024-07-15 14:36:48.710914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.244 [2024-07-15 14:36:48.711015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.711036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.244 [2024-07-15 14:36:48.711046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.711062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.244 [2024-07-15 14:36:48.711076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.244 [2024-07-15 14:36:48.711085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.244 [2024-07-15 14:36:48.711095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.244 [2024-07-15 14:36:48.711109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.244 [2024-07-15 14:36:48.719283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.244 [2024-07-15 14:36:48.719367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.719388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.244 [2024-07-15 14:36:48.719399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.719415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.244 [2024-07-15 14:36:48.719429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.244 [2024-07-15 14:36:48.719438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.244 [2024-07-15 14:36:48.719447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.244 [2024-07-15 14:36:48.719461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.244 [2024-07-15 14:36:48.720985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.244 [2024-07-15 14:36:48.721084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.721105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.244 [2024-07-15 14:36:48.721116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.721132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.244 [2024-07-15 14:36:48.721146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.244 [2024-07-15 14:36:48.721155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.244 [2024-07-15 14:36:48.721164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.244 [2024-07-15 14:36:48.721178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.244 [2024-07-15 14:36:48.729338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.244 [2024-07-15 14:36:48.729453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.729474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.244 [2024-07-15 14:36:48.729485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.729501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.244 [2024-07-15 14:36:48.729515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.244 [2024-07-15 14:36:48.729524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.244 [2024-07-15 14:36:48.729532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.244 [2024-07-15 14:36:48.729547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.244 [2024-07-15 14:36:48.731055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.244 [2024-07-15 14:36:48.731142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.244 [2024-07-15 14:36:48.731162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.244 [2024-07-15 14:36:48.731173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.244 [2024-07-15 14:36:48.731189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.731213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.731222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.731231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.245 [2024-07-15 14:36:48.731245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.245 [2024-07-15 14:36:48.739407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.245 [2024-07-15 14:36:48.739496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.739516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.245 [2024-07-15 14:36:48.739527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.739543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.739558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.739566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.739575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.245 [2024-07-15 14:36:48.739590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.245 [2024-07-15 14:36:48.741111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.245 [2024-07-15 14:36:48.741225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.741245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.245 [2024-07-15 14:36:48.741256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.741272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.741287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.741295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.741304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.245 [2024-07-15 14:36:48.741319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.245 [2024-07-15 14:36:48.749464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.245 [2024-07-15 14:36:48.749568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.749588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.245 [2024-07-15 14:36:48.749599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.749615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.749629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.749638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.749647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.245 [2024-07-15 14:36:48.749661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.245 [2024-07-15 14:36:48.751179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.245 [2024-07-15 14:36:48.751280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.751300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.245 [2024-07-15 14:36:48.751310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.751326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.751340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.751349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.751358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.245 [2024-07-15 14:36:48.751373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.245 [2024-07-15 14:36:48.759537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.245 [2024-07-15 14:36:48.759655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.759676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.245 [2024-07-15 14:36:48.759686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.759703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.759730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.759741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.759750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.245 [2024-07-15 14:36:48.759764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.245 [2024-07-15 14:36:48.761233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.245 [2024-07-15 14:36:48.761348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.761368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.245 [2024-07-15 14:36:48.761379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.761395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.761409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.761418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.761427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.245 [2024-07-15 14:36:48.761441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.245 [2024-07-15 14:36:48.769622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.245 [2024-07-15 14:36:48.769746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.245 [2024-07-15 14:36:48.769768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.245 [2024-07-15 14:36:48.769778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.245 [2024-07-15 14:36:48.769795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.245 [2024-07-15 14:36:48.769809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.245 [2024-07-15 14:36:48.769818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.245 [2024-07-15 14:36:48.769827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.245 [2024-07-15 14:36:48.769842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.246 [2024-07-15 14:36:48.771301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.246 [2024-07-15 14:36:48.771401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.771422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.246 [2024-07-15 14:36:48.771432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.771449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.771463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.771472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.771481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.246 [2024-07-15 14:36:48.771495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.246 [2024-07-15 14:36:48.779707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.246 [2024-07-15 14:36:48.779845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.779865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.246 [2024-07-15 14:36:48.779876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.779891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.779905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.779914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.779922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.246 [2024-07-15 14:36:48.779937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.246 [2024-07-15 14:36:48.781386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.246 [2024-07-15 14:36:48.781470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.781490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.246 [2024-07-15 14:36:48.781501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.781517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.781531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.781540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.781549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.246 [2024-07-15 14:36:48.781564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.246 [2024-07-15 14:36:48.789798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.246 [2024-07-15 14:36:48.789912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.789932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.246 [2024-07-15 14:36:48.789943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.789959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.789973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.789982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.789991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.246 [2024-07-15 14:36:48.790005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.246 [2024-07-15 14:36:48.791454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.246 [2024-07-15 14:36:48.791555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.791575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.246 [2024-07-15 14:36:48.791585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.791601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.791616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.791624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.791633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.246 [2024-07-15 14:36:48.791648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.246 [2024-07-15 14:36:48.799867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.246 [2024-07-15 14:36:48.799956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.799977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.246 [2024-07-15 14:36:48.799987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.800004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.800020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.800028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.800038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.246 [2024-07-15 14:36:48.800052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.246 [2024-07-15 14:36:48.801524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.246 [2024-07-15 14:36:48.801609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.801629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.246 [2024-07-15 14:36:48.801640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.801656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.801670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.801679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.801688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.246 [2024-07-15 14:36:48.801721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.246 [2024-07-15 14:36:48.809926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.246 [2024-07-15 14:36:48.810026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.810048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.246 [2024-07-15 14:36:48.810061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.810084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.810103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.246 [2024-07-15 14:36:48.810118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.246 [2024-07-15 14:36:48.810132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.246 [2024-07-15 14:36:48.810154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.246 [2024-07-15 14:36:48.811578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.246 [2024-07-15 14:36:48.811679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.246 [2024-07-15 14:36:48.811711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.246 [2024-07-15 14:36:48.811724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.246 [2024-07-15 14:36:48.811741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.246 [2024-07-15 14:36:48.811756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.247 [2024-07-15 14:36:48.811765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.247 [2024-07-15 14:36:48.811774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.247 [2024-07-15 14:36:48.811789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.247 [2024-07-15 14:36:48.819996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.247 [2024-07-15 14:36:48.820085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.247 [2024-07-15 14:36:48.820121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.247 [2024-07-15 14:36:48.820132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.247 [2024-07-15 14:36:48.820148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.247 [2024-07-15 14:36:48.820162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.247 [2024-07-15 14:36:48.820171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.247 [2024-07-15 14:36:48.820179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.247 [2024-07-15 14:36:48.820193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.247 [2024-07-15 14:36:48.821650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.247 [2024-07-15 14:36:48.821764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.247 [2024-07-15 14:36:48.821787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.247 [2024-07-15 14:36:48.821798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.247 [2024-07-15 14:36:48.821813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.247 [2024-07-15 14:36:48.821828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.247 [2024-07-15 14:36:48.821837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.247 [2024-07-15 14:36:48.821845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.247 [2024-07-15 14:36:48.821860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.247 [2024-07-15 14:36:48.830052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.247 [2024-07-15 14:36:48.830139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.247 [2024-07-15 14:36:48.830160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.247 [2024-07-15 14:36:48.830171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.247 [2024-07-15 14:36:48.830187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.247 [2024-07-15 14:36:48.830201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.247 [2024-07-15 14:36:48.830210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.247 [2024-07-15 14:36:48.830219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.247 [2024-07-15 14:36:48.830234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.247 [2024-07-15 14:36:48.831734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.247 [2024-07-15 14:36:48.831821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.247 [2024-07-15 14:36:48.831842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.247 [2024-07-15 14:36:48.831853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.247 [2024-07-15 14:36:48.831870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.247 [2024-07-15 14:36:48.831892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.247 [2024-07-15 14:36:48.831902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.247 [2024-07-15 14:36:48.831911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.247 [2024-07-15 14:36:48.831932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.508 [2024-07-15 14:36:48.840109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.508 [2024-07-15 14:36:48.840195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.508 [2024-07-15 14:36:48.840216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.508 [2024-07-15 14:36:48.840227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.508 [2024-07-15 14:36:48.840243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.508 [2024-07-15 14:36:48.840258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.508 [2024-07-15 14:36:48.840266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.508 [2024-07-15 14:36:48.840276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.508 [2024-07-15 14:36:48.840290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.508 [2024-07-15 14:36:48.841790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.508 [2024-07-15 14:36:48.841875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.508 [2024-07-15 14:36:48.841896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.508 [2024-07-15 14:36:48.841906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.508 [2024-07-15 14:36:48.841923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.508 [2024-07-15 14:36:48.841937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.508 [2024-07-15 14:36:48.841946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.508 [2024-07-15 14:36:48.841954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.508 [2024-07-15 14:36:48.841969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.508 [2024-07-15 14:36:48.850165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.508 [2024-07-15 14:36:48.850254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.508 [2024-07-15 14:36:48.850275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.508 [2024-07-15 14:36:48.850285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.508 [2024-07-15 14:36:48.850302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.508 [2024-07-15 14:36:48.850316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.508 [2024-07-15 14:36:48.850334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.508 [2024-07-15 14:36:48.850343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.508 [2024-07-15 14:36:48.850359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.508 [2024-07-15 14:36:48.851845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.508 [2024-07-15 14:36:48.851931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.508 [2024-07-15 14:36:48.851951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.508 [2024-07-15 14:36:48.851962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.508 [2024-07-15 14:36:48.851978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.508 [2024-07-15 14:36:48.851992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.508 [2024-07-15 14:36:48.852001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.508 [2024-07-15 14:36:48.852010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.508 [2024-07-15 14:36:48.852024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.508 [2024-07-15 14:36:48.860225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.508 [2024-07-15 14:36:48.860310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.508 [2024-07-15 14:36:48.860330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.508 [2024-07-15 14:36:48.860341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.508 [2024-07-15 14:36:48.860357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.508 [2024-07-15 14:36:48.860372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.508 [2024-07-15 14:36:48.860380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.508 [2024-07-15 14:36:48.860389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.508 [2024-07-15 14:36:48.860404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.508 [2024-07-15 14:36:48.861899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.508 [2024-07-15 14:36:48.861981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.862002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.509 [2024-07-15 14:36:48.862012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.862028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.862043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.862051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.862060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.509 [2024-07-15 14:36:48.862074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.509 [2024-07-15 14:36:48.870280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.509 [2024-07-15 14:36:48.870372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.870393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.509 [2024-07-15 14:36:48.870404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.870420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.870434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.870443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.870452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.509 [2024-07-15 14:36:48.870467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.509 [2024-07-15 14:36:48.871953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.509 [2024-07-15 14:36:48.872038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.872058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.509 [2024-07-15 14:36:48.872068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.872085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.872099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.872108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.872117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.509 [2024-07-15 14:36:48.872131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.509 [2024-07-15 14:36:48.880336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.509 [2024-07-15 14:36:48.880421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.880441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.509 [2024-07-15 14:36:48.880452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.880468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.880483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.880491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.880500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.509 [2024-07-15 14:36:48.880515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.509 [2024-07-15 14:36:48.882008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.509 [2024-07-15 14:36:48.882106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.882126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.509 [2024-07-15 14:36:48.882137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.882152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.882167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.882176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.882185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.509 [2024-07-15 14:36:48.882199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.509 [2024-07-15 14:36:48.890391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.509 [2024-07-15 14:36:48.890477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.890498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.509 [2024-07-15 14:36:48.890508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.890524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.890539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.890547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.890556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.509 [2024-07-15 14:36:48.890571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.509 [2024-07-15 14:36:48.892094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.509 [2024-07-15 14:36:48.892193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.892213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.509 [2024-07-15 14:36:48.892224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.892240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.892254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.892263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.892272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.509 [2024-07-15 14:36:48.892286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.509 [2024-07-15 14:36:48.900450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.509 [2024-07-15 14:36:48.900548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.900569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.509 [2024-07-15 14:36:48.900590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.900606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.900621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.900629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.900638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.509 [2024-07-15 14:36:48.900653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.509 [2024-07-15 14:36:48.902162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.509 [2024-07-15 14:36:48.902276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.902297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.509 [2024-07-15 14:36:48.902307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.902335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.902352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.902361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.902371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.509 [2024-07-15 14:36:48.902386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.509 [2024-07-15 14:36:48.910511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.509 [2024-07-15 14:36:48.910606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.910627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.509 [2024-07-15 14:36:48.910637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.910653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.910668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.910676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.910685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.509 [2024-07-15 14:36:48.910714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.509 [2024-07-15 14:36:48.912231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.509 [2024-07-15 14:36:48.912315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.509 [2024-07-15 14:36:48.912336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.509 [2024-07-15 14:36:48.912347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.509 [2024-07-15 14:36:48.912363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.509 [2024-07-15 14:36:48.912377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.509 [2024-07-15 14:36:48.912386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.509 [2024-07-15 14:36:48.912395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.509 [2024-07-15 14:36:48.912409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.509 [2024-07-15 14:36:48.920573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.509 [2024-07-15 14:36:48.920659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.920679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.510 [2024-07-15 14:36:48.920690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.920719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.920735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.920744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.920754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.510 [2024-07-15 14:36:48.920768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.510 [2024-07-15 14:36:48.922285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.510 [2024-07-15 14:36:48.922378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.922399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.510 [2024-07-15 14:36:48.922410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.922426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.922440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.922449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.922458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.510 [2024-07-15 14:36:48.922472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.510 [2024-07-15 14:36:48.930631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.510 [2024-07-15 14:36:48.930729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.930750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.510 [2024-07-15 14:36:48.930761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.930778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.930793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.930801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.930810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.510 [2024-07-15 14:36:48.930837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.510 [2024-07-15 14:36:48.932349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.510 [2024-07-15 14:36:48.932436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.932457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.510 [2024-07-15 14:36:48.932468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.932486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.932500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.932509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.932518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.510 [2024-07-15 14:36:48.932533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.510 [2024-07-15 14:36:48.940690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.510 [2024-07-15 14:36:48.940788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.940809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.510 [2024-07-15 14:36:48.940820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.940836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.940851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.940860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.940868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.510 [2024-07-15 14:36:48.940883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.510 [2024-07-15 14:36:48.942405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.510 [2024-07-15 14:36:48.942492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.942512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.510 [2024-07-15 14:36:48.942533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.942550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.942564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.942573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.942581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.510 [2024-07-15 14:36:48.942596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.510 [2024-07-15 14:36:48.950754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.510 [2024-07-15 14:36:48.950840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.950861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.510 [2024-07-15 14:36:48.950871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.950897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.950913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.950922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.950931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.510 [2024-07-15 14:36:48.950946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.510 [2024-07-15 14:36:48.952460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.510 [2024-07-15 14:36:48.952543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.952563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.510 [2024-07-15 14:36:48.952574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.952591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.952606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.952614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.952623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.510 [2024-07-15 14:36:48.952638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.510 [2024-07-15 14:36:48.960809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.510 [2024-07-15 14:36:48.960895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.960916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.510 [2024-07-15 14:36:48.960926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.960943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.960957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.960966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.960975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.510 [2024-07-15 14:36:48.960989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.510 [2024-07-15 14:36:48.962513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.510 [2024-07-15 14:36:48.962597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.962617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.510 [2024-07-15 14:36:48.962627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.962643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.962657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.962666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.962675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.510 [2024-07-15 14:36:48.962690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.510 [2024-07-15 14:36:48.970866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.510 [2024-07-15 14:36:48.970951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.970971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.510 [2024-07-15 14:36:48.970982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.970998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.971012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.510 [2024-07-15 14:36:48.971021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.510 [2024-07-15 14:36:48.971030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.510 [2024-07-15 14:36:48.971044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.510 [2024-07-15 14:36:48.972568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.510 [2024-07-15 14:36:48.972651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.510 [2024-07-15 14:36:48.972672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.510 [2024-07-15 14:36:48.972682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.510 [2024-07-15 14:36:48.972711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.510 [2024-07-15 14:36:48.972728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:48.972737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:48.972746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.511 [2024-07-15 14:36:48.972760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.511 [2024-07-15 14:36:48.980923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.511 [2024-07-15 14:36:48.981009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:48.981029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.511 [2024-07-15 14:36:48.981040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:48.981056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:48.981071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:48.981079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:48.981088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.511 [2024-07-15 14:36:48.981103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.511 [2024-07-15 14:36:48.982620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.511 [2024-07-15 14:36:48.982719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:48.982741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.511 [2024-07-15 14:36:48.982751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:48.982768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:48.982782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:48.982791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:48.982800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.511 [2024-07-15 14:36:48.982814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.511 [2024-07-15 14:36:48.990979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.511 [2024-07-15 14:36:48.991081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:48.991102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.511 [2024-07-15 14:36:48.991112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:48.991130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:48.991145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:48.991153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:48.991162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.511 [2024-07-15 14:36:48.991177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.511 [2024-07-15 14:36:48.992676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.511 [2024-07-15 14:36:48.992801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:48.992822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.511 [2024-07-15 14:36:48.992832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:48.992848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:48.992863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:48.992872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:48.992881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.511 [2024-07-15 14:36:48.992895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.511 [2024-07-15 14:36:49.001050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.511 [2024-07-15 14:36:49.001178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.001201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.511 [2024-07-15 14:36:49.001211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.001228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.001242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.001251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.001259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.511 [2024-07-15 14:36:49.001274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.511 [2024-07-15 14:36:49.002771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.511 [2024-07-15 14:36:49.002867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.002888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.511 [2024-07-15 14:36:49.002899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.002916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.002941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.002951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.002960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.511 [2024-07-15 14:36:49.002975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.511 [2024-07-15 14:36:49.011140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.511 [2024-07-15 14:36:49.011224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.011244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.511 [2024-07-15 14:36:49.011255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.011271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.011285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.011294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.011302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.511 [2024-07-15 14:36:49.011317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.511 [2024-07-15 14:36:49.012825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.511 [2024-07-15 14:36:49.012908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.012929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.511 [2024-07-15 14:36:49.012940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.012956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.012970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.012979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.012988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.511 [2024-07-15 14:36:49.013002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.511 [2024-07-15 14:36:49.021195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.511 [2024-07-15 14:36:49.021296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.021317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.511 [2024-07-15 14:36:49.021328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.021344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.021358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.021367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.021376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.511 [2024-07-15 14:36:49.021390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.511 [2024-07-15 14:36:49.022878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.511 [2024-07-15 14:36:49.022960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.022981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.511 [2024-07-15 14:36:49.022992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.023016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.023032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.023041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.023050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.511 [2024-07-15 14:36:49.023065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.511 [2024-07-15 14:36:49.031267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.511 [2024-07-15 14:36:49.031352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.031372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.511 [2024-07-15 14:36:49.031383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.511 [2024-07-15 14:36:49.031399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.511 [2024-07-15 14:36:49.031414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.511 [2024-07-15 14:36:49.031422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.511 [2024-07-15 14:36:49.031431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.511 [2024-07-15 14:36:49.031445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.511 [2024-07-15 14:36:49.032932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.511 [2024-07-15 14:36:49.033014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.511 [2024-07-15 14:36:49.033035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.512 [2024-07-15 14:36:49.033046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.033062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.033076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.033085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.033094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.512 [2024-07-15 14:36:49.033108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.512 [2024-07-15 14:36:49.041321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.512 [2024-07-15 14:36:49.041423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.041444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.512 [2024-07-15 14:36:49.041456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.041472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.041486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.041495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.041504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.512 [2024-07-15 14:36:49.041519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.512 [2024-07-15 14:36:49.042982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.512 [2024-07-15 14:36:49.043069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.043090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.512 [2024-07-15 14:36:49.043100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.043116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.043131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.043140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.043149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.512 [2024-07-15 14:36:49.043163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.512 [2024-07-15 14:36:49.051392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.512 [2024-07-15 14:36:49.051478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.051498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.512 [2024-07-15 14:36:49.051509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.051525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.051540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.051549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.051557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.512 [2024-07-15 14:36:49.051572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.512 [2024-07-15 14:36:49.053038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.512 [2024-07-15 14:36:49.053121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.053142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.512 [2024-07-15 14:36:49.053153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.053168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.053183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.053192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.053200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.512 [2024-07-15 14:36:49.053215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.512 [2024-07-15 14:36:49.061447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.512 [2024-07-15 14:36:49.061533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.061554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.512 [2024-07-15 14:36:49.061564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.061580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.061595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.061604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.061612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.512 [2024-07-15 14:36:49.061627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.512 [2024-07-15 14:36:49.063092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.512 [2024-07-15 14:36:49.063177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.063198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.512 [2024-07-15 14:36:49.063208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.063224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.063238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.063247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.063256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.512 [2024-07-15 14:36:49.063270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.512 [2024-07-15 14:36:49.071501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.512 [2024-07-15 14:36:49.071603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.512 [2024-07-15 14:36:49.071623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.512 [2024-07-15 14:36:49.071634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.512 [2024-07-15 14:36:49.071649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.512 [2024-07-15 14:36:49.071664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.512 [2024-07-15 14:36:49.071673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.512 [2024-07-15 14:36:49.071682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.512 [2024-07-15 14:36:49.071711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.512 [2024-07-15 14:36:49.073146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.513 [2024-07-15 14:36:49.073244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.513 [2024-07-15 14:36:49.073265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.513 [2024-07-15 14:36:49.073275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.513 [2024-07-15 14:36:49.073292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.513 [2024-07-15 14:36:49.073306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.513 [2024-07-15 14:36:49.073315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.513 [2024-07-15 14:36:49.073324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.513 [2024-07-15 14:36:49.073338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.513 [2024-07-15 14:36:49.081573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.513 [2024-07-15 14:36:49.081659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.513 [2024-07-15 14:36:49.081680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.513 [2024-07-15 14:36:49.081691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.513 [2024-07-15 14:36:49.081719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.513 [2024-07-15 14:36:49.081735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.513 [2024-07-15 14:36:49.081744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.513 [2024-07-15 14:36:49.081753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.513 [2024-07-15 14:36:49.081767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.513 [2024-07-15 14:36:49.083214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.513 [2024-07-15 14:36:49.083301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.513 [2024-07-15 14:36:49.083321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.513 [2024-07-15 14:36:49.083331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.513 [2024-07-15 14:36:49.083348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.513 [2024-07-15 14:36:49.083362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.513 [2024-07-15 14:36:49.083371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.513 [2024-07-15 14:36:49.083380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.513 [2024-07-15 14:36:49.083394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.513 [2024-07-15 14:36:49.091630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.513 [2024-07-15 14:36:49.091741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.513 [2024-07-15 14:36:49.091763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.513 [2024-07-15 14:36:49.091774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.513 [2024-07-15 14:36:49.091791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.513 [2024-07-15 14:36:49.091806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.513 [2024-07-15 14:36:49.091815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.513 [2024-07-15 14:36:49.091824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.513 [2024-07-15 14:36:49.091838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.513 [2024-07-15 14:36:49.093271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.513 [2024-07-15 14:36:49.093355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.513 [2024-07-15 14:36:49.093376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.513 [2024-07-15 14:36:49.093387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.513 [2024-07-15 14:36:49.093402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.513 [2024-07-15 14:36:49.093417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.513 [2024-07-15 14:36:49.093426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.513 [2024-07-15 14:36:49.093435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.513 [2024-07-15 14:36:49.093449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.775 [2024-07-15 14:36:49.101707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.775 [2024-07-15 14:36:49.101792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.775 [2024-07-15 14:36:49.101813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.775 [2024-07-15 14:36:49.101824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.775 [2024-07-15 14:36:49.101840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.775 [2024-07-15 14:36:49.101854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.775 [2024-07-15 14:36:49.101863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.775 [2024-07-15 14:36:49.101872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.775 [2024-07-15 14:36:49.101886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.775 [2024-07-15 14:36:49.103325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.775 [2024-07-15 14:36:49.103412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.775 [2024-07-15 14:36:49.103432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.775 [2024-07-15 14:36:49.103443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.775 [2024-07-15 14:36:49.103459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.103473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.103482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.103491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.776 [2024-07-15 14:36:49.103505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.776 [2024-07-15 14:36:49.111764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.776 [2024-07-15 14:36:49.111850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.111870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.776 [2024-07-15 14:36:49.111880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.111897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.111911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.111920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.111928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.776 [2024-07-15 14:36:49.111943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.776 [2024-07-15 14:36:49.113383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.776 [2024-07-15 14:36:49.113467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.113487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.776 [2024-07-15 14:36:49.113497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.113514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.113529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.113538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.113547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.776 [2024-07-15 14:36:49.113561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.776 [2024-07-15 14:36:49.121819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.776 [2024-07-15 14:36:49.121918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.121938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.776 [2024-07-15 14:36:49.121949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.121965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.121979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.121988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.121997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.776 [2024-07-15 14:36:49.122011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.776 [2024-07-15 14:36:49.123452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.776 [2024-07-15 14:36:49.123557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.123577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.776 [2024-07-15 14:36:49.123587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.123603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.123617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.123626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.123635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.776 [2024-07-15 14:36:49.123649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.776 [2024-07-15 14:36:49.131894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.776 [2024-07-15 14:36:49.131999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.132020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.776 [2024-07-15 14:36:49.132030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.132046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.132060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.132069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.132078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.776 [2024-07-15 14:36:49.132092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.776 [2024-07-15 14:36:49.133523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.776 [2024-07-15 14:36:49.133610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.133631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.776 [2024-07-15 14:36:49.133641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.133657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.133671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.133680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.133689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.776 [2024-07-15 14:36:49.133717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.776 [2024-07-15 14:36:49.141966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.776 [2024-07-15 14:36:49.142068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.142088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.776 [2024-07-15 14:36:49.142099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.142115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.142130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.142138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.142147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.776 [2024-07-15 14:36:49.142162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.776 [2024-07-15 14:36:49.143579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.776 [2024-07-15 14:36:49.143683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.143719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.776 [2024-07-15 14:36:49.143732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.143749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.143763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.143773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.143782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.776 [2024-07-15 14:36:49.143797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.776 [2024-07-15 14:36:49.152038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.776 [2024-07-15 14:36:49.152124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.152145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.776 [2024-07-15 14:36:49.152156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.152172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.152186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.152194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.152203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.776 [2024-07-15 14:36:49.152217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.776 [2024-07-15 14:36:49.153651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.776 [2024-07-15 14:36:49.153744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.153765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.776 [2024-07-15 14:36:49.153776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.776 [2024-07-15 14:36:49.153793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.776 [2024-07-15 14:36:49.153806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.776 [2024-07-15 14:36:49.153815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.776 [2024-07-15 14:36:49.153824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.776 [2024-07-15 14:36:49.153839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.776 [2024-07-15 14:36:49.162094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.776 [2024-07-15 14:36:49.162180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.776 [2024-07-15 14:36:49.162201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.777 [2024-07-15 14:36:49.162211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.162227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.162242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.162251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.162260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.777 [2024-07-15 14:36:49.162274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.777 [2024-07-15 14:36:49.163705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.777 [2024-07-15 14:36:49.163815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.163835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.777 [2024-07-15 14:36:49.163846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.163863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.163877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.163886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.163895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.777 [2024-07-15 14:36:49.163909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.777 [2024-07-15 14:36:49.172151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.777 [2024-07-15 14:36:49.172235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.172255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.777 [2024-07-15 14:36:49.172266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.172282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.172296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.172305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.172314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.777 [2024-07-15 14:36:49.172329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.777 [2024-07-15 14:36:49.173786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.777 [2024-07-15 14:36:49.173871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.173891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.777 [2024-07-15 14:36:49.173902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.173918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.173932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.173941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.173950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.777 [2024-07-15 14:36:49.173965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.777 [2024-07-15 14:36:49.182207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.777 [2024-07-15 14:36:49.182291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.182312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.777 [2024-07-15 14:36:49.182333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.182353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.182368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.182376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.182385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.777 [2024-07-15 14:36:49.182400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.777 [2024-07-15 14:36:49.183840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.777 [2024-07-15 14:36:49.183924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.183944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.777 [2024-07-15 14:36:49.183955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.183971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.183985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.183994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.184003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.777 [2024-07-15 14:36:49.184017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.777 [2024-07-15 14:36:49.192263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.777 [2024-07-15 14:36:49.192347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.192368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.777 [2024-07-15 14:36:49.192378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.192395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.192410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.192418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.192427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.777 [2024-07-15 14:36:49.192442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.777 [2024-07-15 14:36:49.193895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.777 [2024-07-15 14:36:49.193979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.193999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.777 [2024-07-15 14:36:49.194010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.194026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.194041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.194050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.194058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.777 [2024-07-15 14:36:49.194072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.777 [2024-07-15 14:36:49.202318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.777 [2024-07-15 14:36:49.202414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.202435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.777 [2024-07-15 14:36:49.202445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.202462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.202476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.202485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.202493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.777 [2024-07-15 14:36:49.202508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.777 [2024-07-15 14:36:49.203949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.777 [2024-07-15 14:36:49.204027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.204047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.777 [2024-07-15 14:36:49.204058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.204073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.204088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.204097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.204105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.777 [2024-07-15 14:36:49.204120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.777 [2024-07-15 14:36:49.212383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.777 [2024-07-15 14:36:49.212476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.777 [2024-07-15 14:36:49.212497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.777 [2024-07-15 14:36:49.212508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.777 [2024-07-15 14:36:49.212525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.777 [2024-07-15 14:36:49.212539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.777 [2024-07-15 14:36:49.212548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.777 [2024-07-15 14:36:49.212557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.777 [2024-07-15 14:36:49.212571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.777 [2024-07-15 14:36:49.213999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.777 [2024-07-15 14:36:49.214084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.214105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.778 [2024-07-15 14:36:49.214116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.214133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.214147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.214156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.214165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.778 [2024-07-15 14:36:49.214179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.778 [2024-07-15 14:36:49.222442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.778 [2024-07-15 14:36:49.222535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.222557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.778 [2024-07-15 14:36:49.222567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.222584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.222598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.222607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.222616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.778 [2024-07-15 14:36:49.222630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.778 [2024-07-15 14:36:49.224072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.778 [2024-07-15 14:36:49.224157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.224183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.778 [2024-07-15 14:36:49.224193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.224209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.224223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.224233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.224241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.778 [2024-07-15 14:36:49.224255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.778 [2024-07-15 14:36:49.232501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.778 [2024-07-15 14:36:49.232589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.232610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.778 [2024-07-15 14:36:49.232621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.232639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.232653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.232662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.232671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.778 [2024-07-15 14:36:49.232686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.778 [2024-07-15 14:36:49.234126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.778 [2024-07-15 14:36:49.234211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.234231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.778 [2024-07-15 14:36:49.234242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.234258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.234272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.234281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.234290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.778 [2024-07-15 14:36:49.234305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.778 [2024-07-15 14:36:49.242557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.778 [2024-07-15 14:36:49.242645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.242665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.778 [2024-07-15 14:36:49.242676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.242692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.242720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.242730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.242739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.778 [2024-07-15 14:36:49.242754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.778 [2024-07-15 14:36:49.244181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.778 [2024-07-15 14:36:49.244281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.244302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.778 [2024-07-15 14:36:49.244312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.244329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.244343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.244352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.244361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.778 [2024-07-15 14:36:49.244375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.778 [2024-07-15 14:36:49.252614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.778 [2024-07-15 14:36:49.252757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.252779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.778 [2024-07-15 14:36:49.252790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.252806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.252820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.252829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.252839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.778 [2024-07-15 14:36:49.252853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.778 [2024-07-15 14:36:49.254234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.778 [2024-07-15 14:36:49.254374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.254395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.778 [2024-07-15 14:36:49.254407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.254424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.254437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.254447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.254456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.778 [2024-07-15 14:36:49.254470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.778 [2024-07-15 14:36:49.262701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.778 [2024-07-15 14:36:49.262840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.262861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.778 [2024-07-15 14:36:49.262872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.262889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.262903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.262911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.262920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.778 [2024-07-15 14:36:49.262935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.778 [2024-07-15 14:36:49.264319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.778 [2024-07-15 14:36:49.264404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.778 [2024-07-15 14:36:49.264424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.778 [2024-07-15 14:36:49.264435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.778 [2024-07-15 14:36:49.264451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.778 [2024-07-15 14:36:49.264465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.778 [2024-07-15 14:36:49.264474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.778 [2024-07-15 14:36:49.264483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.778 [2024-07-15 14:36:49.264497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.779 [2024-07-15 14:36:49.272793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.779 [2024-07-15 14:36:49.272887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.272907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.779 [2024-07-15 14:36:49.272918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.272934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.272949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.272958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.272967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.779 [2024-07-15 14:36:49.272981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.779 [2024-07-15 14:36:49.274373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.779 [2024-07-15 14:36:49.274456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.274477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.779 [2024-07-15 14:36:49.274487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.274503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.274517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.274526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.274535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.779 [2024-07-15 14:36:49.274552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.779 [2024-07-15 14:36:49.282855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.779 [2024-07-15 14:36:49.282968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.282988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.779 [2024-07-15 14:36:49.282999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.283015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.283041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.283052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.283061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.779 [2024-07-15 14:36:49.283076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.779 [2024-07-15 14:36:49.284427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.779 [2024-07-15 14:36:49.284512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.284532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.779 [2024-07-15 14:36:49.284543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.284559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.284573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.284582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.284591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.779 [2024-07-15 14:36:49.284605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.779 [2024-07-15 14:36:49.292923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.779 [2024-07-15 14:36:49.293009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.293030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.779 [2024-07-15 14:36:49.293040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.293056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.293070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.293079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.293088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.779 [2024-07-15 14:36:49.293102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.779 [2024-07-15 14:36:49.294481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.779 [2024-07-15 14:36:49.294565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.294586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.779 [2024-07-15 14:36:49.294596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.294612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.294626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.294635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.294644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.779 [2024-07-15 14:36:49.294658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.779 [2024-07-15 14:36:49.302979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.779 [2024-07-15 14:36:49.303064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.303084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.779 [2024-07-15 14:36:49.303095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.303120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.303136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.303145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.303154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.779 [2024-07-15 14:36:49.303168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.779 [2024-07-15 14:36:49.304536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.779 [2024-07-15 14:36:49.304621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.304641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.779 [2024-07-15 14:36:49.304652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.304670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.304684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.304693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.304720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.779 [2024-07-15 14:36:49.304736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.779 [2024-07-15 14:36:49.313036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.779 [2024-07-15 14:36:49.313168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.313190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.779 [2024-07-15 14:36:49.313201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.313217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.313232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.313240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.313250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.779 [2024-07-15 14:36:49.313265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.779 [2024-07-15 14:36:49.314592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.779 [2024-07-15 14:36:49.314681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.314719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.779 [2024-07-15 14:36:49.314732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.314749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.314763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.314772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.314781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.779 [2024-07-15 14:36:49.314795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.779 [2024-07-15 14:36:49.323129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.779 [2024-07-15 14:36:49.323232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.779 [2024-07-15 14:36:49.323252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.779 [2024-07-15 14:36:49.323264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.779 [2024-07-15 14:36:49.323280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.779 [2024-07-15 14:36:49.323295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.779 [2024-07-15 14:36:49.323304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.779 [2024-07-15 14:36:49.323312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.780 [2024-07-15 14:36:49.323327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.780 [2024-07-15 14:36:49.324650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.780 [2024-07-15 14:36:49.324750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.324772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.780 [2024-07-15 14:36:49.324783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.324799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.324814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.324823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.324832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.780 [2024-07-15 14:36:49.324846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.780 [2024-07-15 14:36:49.333202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.780 [2024-07-15 14:36:49.333292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.333312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.780 [2024-07-15 14:36:49.333323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.333339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.333354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.333362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.333371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.780 [2024-07-15 14:36:49.333386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.780 [2024-07-15 14:36:49.334744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.780 [2024-07-15 14:36:49.334863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.334883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.780 [2024-07-15 14:36:49.334894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.334910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.334925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.334934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.334942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.780 [2024-07-15 14:36:49.334957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.780 [2024-07-15 14:36:49.343259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.780 [2024-07-15 14:36:49.343375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.343396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.780 [2024-07-15 14:36:49.343406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.343422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.343437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.343446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.343455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.780 [2024-07-15 14:36:49.343469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.780 [2024-07-15 14:36:49.344838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.780 [2024-07-15 14:36:49.344921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.344942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.780 [2024-07-15 14:36:49.344953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.344968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.344983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.344992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.345000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.780 [2024-07-15 14:36:49.345015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.780 [2024-07-15 14:36:49.353331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.780 [2024-07-15 14:36:49.353418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.353438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.780 [2024-07-15 14:36:49.353449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.353466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.353480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.353489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.353498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.780 [2024-07-15 14:36:49.353512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.780 [2024-07-15 14:36:49.354892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.780 [2024-07-15 14:36:49.354976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.354997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.780 [2024-07-15 14:36:49.355007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.355024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.355038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.355047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.355056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.780 [2024-07-15 14:36:49.355070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.780 [2024-07-15 14:36:49.363389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:09.780 [2024-07-15 14:36:49.363474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.363495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:09.780 [2024-07-15 14:36:49.363505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.363522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.363536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.363545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.363554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:09.780 [2024-07-15 14:36:49.363569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.780 [2024-07-15 14:36:49.364946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:09.780 [2024-07-15 14:36:49.365032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.780 [2024-07-15 14:36:49.365052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:09.780 [2024-07-15 14:36:49.365063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:09.780 [2024-07-15 14:36:49.365079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:09.780 [2024-07-15 14:36:49.365094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:09.780 [2024-07-15 14:36:49.365103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:09.780 [2024-07-15 14:36:49.365111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:09.781 [2024-07-15 14:36:49.365126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.042 [2024-07-15 14:36:49.373446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.042 [2024-07-15 14:36:49.373530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.042 [2024-07-15 14:36:49.373550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.042 [2024-07-15 14:36:49.373561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.042 [2024-07-15 14:36:49.373577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.042 [2024-07-15 14:36:49.373592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.042 [2024-07-15 14:36:49.373601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.042 [2024-07-15 14:36:49.373610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.042 [2024-07-15 14:36:49.373625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.042 [2024-07-15 14:36:49.375002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.042 [2024-07-15 14:36:49.375100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.042 [2024-07-15 14:36:49.375121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.042 [2024-07-15 14:36:49.375132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.042 [2024-07-15 14:36:49.375148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.042 [2024-07-15 14:36:49.375172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.042 [2024-07-15 14:36:49.375183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.042 [2024-07-15 14:36:49.375192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.042 [2024-07-15 14:36:49.375206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.042 [2024-07-15 14:36:49.383500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.042 [2024-07-15 14:36:49.383616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.042 [2024-07-15 14:36:49.383636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.042 [2024-07-15 14:36:49.383647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.042 [2024-07-15 14:36:49.383663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.042 [2024-07-15 14:36:49.383677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.042 [2024-07-15 14:36:49.383686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.042 [2024-07-15 14:36:49.383695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.042 [2024-07-15 14:36:49.383722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.042 [2024-07-15 14:36:49.385081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.042 [2024-07-15 14:36:49.385195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.042 [2024-07-15 14:36:49.385215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.042 [2024-07-15 14:36:49.385226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.042 [2024-07-15 14:36:49.385242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.042 [2024-07-15 14:36:49.385256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.042 [2024-07-15 14:36:49.385265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.042 [2024-07-15 14:36:49.385274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.042 [2024-07-15 14:36:49.385289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.042 [2024-07-15 14:36:49.393586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.042 [2024-07-15 14:36:49.393672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.042 [2024-07-15 14:36:49.393692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.042 [2024-07-15 14:36:49.393716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.042 [2024-07-15 14:36:49.393733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.042 [2024-07-15 14:36:49.393748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.042 [2024-07-15 14:36:49.393757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.042 [2024-07-15 14:36:49.393766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.042 [2024-07-15 14:36:49.393781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.042 [2024-07-15 14:36:49.395176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.042 [2024-07-15 14:36:49.395260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.042 [2024-07-15 14:36:49.395280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.042 [2024-07-15 14:36:49.395291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.395307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.395321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.395330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.395339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.043 [2024-07-15 14:36:49.395353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.043 [2024-07-15 14:36:49.403643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.043 [2024-07-15 14:36:49.403740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.403761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.043 [2024-07-15 14:36:49.403772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.403789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.403803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.403812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.403821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.043 [2024-07-15 14:36:49.403836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.043 [2024-07-15 14:36:49.405230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.043 [2024-07-15 14:36:49.405315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.405336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.043 [2024-07-15 14:36:49.405346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.405362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.405377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.405385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.405395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.043 [2024-07-15 14:36:49.405409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.043 [2024-07-15 14:36:49.413699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.043 [2024-07-15 14:36:49.413853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.413874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.043 [2024-07-15 14:36:49.413885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.413902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.413917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.413926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.413934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.043 [2024-07-15 14:36:49.413949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.043 [2024-07-15 14:36:49.415285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.043 [2024-07-15 14:36:49.415368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.415388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.043 [2024-07-15 14:36:49.415398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.415415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.415430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.415438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.415447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.043 [2024-07-15 14:36:49.415462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.043 [2024-07-15 14:36:49.423823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.043 [2024-07-15 14:36:49.423930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.423951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.043 [2024-07-15 14:36:49.423962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.423978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.423993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.424001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.424010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.043 [2024-07-15 14:36:49.424025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.043 [2024-07-15 14:36:49.425357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.043 [2024-07-15 14:36:49.425459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.425480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.043 [2024-07-15 14:36:49.425491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.425507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.425521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.425530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.425539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.043 [2024-07-15 14:36:49.425553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.043 [2024-07-15 14:36:49.433896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.043 [2024-07-15 14:36:49.434003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.434023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.043 [2024-07-15 14:36:49.434034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.434067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.434081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.434090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.434099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.043 [2024-07-15 14:36:49.434113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.043 [2024-07-15 14:36:49.435428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.043 [2024-07-15 14:36:49.435557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.435576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.043 [2024-07-15 14:36:49.435586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.435601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.435614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.435622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.435631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.043 [2024-07-15 14:36:49.435644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.043 [2024-07-15 14:36:49.443971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.043 [2024-07-15 14:36:49.444073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.444093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.043 [2024-07-15 14:36:49.444103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.444119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.444134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.444142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.444151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.043 [2024-07-15 14:36:49.444166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.043 [2024-07-15 14:36:49.445529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.043 [2024-07-15 14:36:49.445629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.043 [2024-07-15 14:36:49.445649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.043 [2024-07-15 14:36:49.445659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.043 [2024-07-15 14:36:49.445675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.043 [2024-07-15 14:36:49.445689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.043 [2024-07-15 14:36:49.445711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.043 [2024-07-15 14:36:49.445721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.043 [2024-07-15 14:36:49.445736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.043 [2024-07-15 14:36:49.454042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.043 [2024-07-15 14:36:49.454142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.454162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.044 [2024-07-15 14:36:49.454172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.454188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.454202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.454210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.454218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.044 [2024-07-15 14:36:49.454232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.044 [2024-07-15 14:36:49.455598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.044 [2024-07-15 14:36:49.455684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.455736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.044 [2024-07-15 14:36:49.455750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.455772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.455787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.455795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.455804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.044 [2024-07-15 14:36:49.455819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.044 [2024-07-15 14:36:49.464115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.044 [2024-07-15 14:36:49.464203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.464224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.044 [2024-07-15 14:36:49.464235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.464252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.464266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.464275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.464284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.044 [2024-07-15 14:36:49.464298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.044 [2024-07-15 14:36:49.465656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.044 [2024-07-15 14:36:49.465752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.465773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.044 [2024-07-15 14:36:49.465784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.465800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.465814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.465823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.465832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.044 [2024-07-15 14:36:49.465847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.044 [2024-07-15 14:36:49.474172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.044 [2024-07-15 14:36:49.474259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.474280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.044 [2024-07-15 14:36:49.474290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.474306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.474321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.474339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.474349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.044 [2024-07-15 14:36:49.474364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.044 [2024-07-15 14:36:49.475718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.044 [2024-07-15 14:36:49.475812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.475833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.044 [2024-07-15 14:36:49.475843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.475860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.475874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.475883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.475892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.044 [2024-07-15 14:36:49.475907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.044 [2024-07-15 14:36:49.484228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.044 [2024-07-15 14:36:49.484314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.484341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.044 [2024-07-15 14:36:49.484351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.484367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.484382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.484390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.484400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.044 [2024-07-15 14:36:49.484415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.044 [2024-07-15 14:36:49.485772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.044 [2024-07-15 14:36:49.485856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.485877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.044 [2024-07-15 14:36:49.485887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.485903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.485918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.485927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.485935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.044 [2024-07-15 14:36:49.485950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.044 [2024-07-15 14:36:49.494284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.044 [2024-07-15 14:36:49.494382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.494404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.044 [2024-07-15 14:36:49.494414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.494431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.494445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.494454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.494463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.044 [2024-07-15 14:36:49.494479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.044 [2024-07-15 14:36:49.495826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.044 [2024-07-15 14:36:49.495917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.495937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.044 [2024-07-15 14:36:49.495948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.495964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.495978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.495986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.495995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.044 [2024-07-15 14:36:49.496010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.044 [2024-07-15 14:36:49.504349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.044 [2024-07-15 14:36:49.504438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.044 [2024-07-15 14:36:49.504460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.044 [2024-07-15 14:36:49.504470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.044 [2024-07-15 14:36:49.504488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.044 [2024-07-15 14:36:49.504502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.044 [2024-07-15 14:36:49.504511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.044 [2024-07-15 14:36:49.504519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.044 [2024-07-15 14:36:49.504534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.044 [2024-07-15 14:36:49.505880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.044 [2024-07-15 14:36:49.505965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.505985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.045 [2024-07-15 14:36:49.505995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.506013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.506027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.506036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.506045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.045 [2024-07-15 14:36:49.506059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.045 [2024-07-15 14:36:49.514405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.045 [2024-07-15 14:36:49.514492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.514512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.045 [2024-07-15 14:36:49.514523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.514539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.514553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.514562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.514570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.045 [2024-07-15 14:36:49.514585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.045 [2024-07-15 14:36:49.515933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.045 [2024-07-15 14:36:49.516017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.516036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.045 [2024-07-15 14:36:49.516047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.516063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.516078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.516086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.516102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.045 [2024-07-15 14:36:49.516116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.045 [2024-07-15 14:36:49.524465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.045 [2024-07-15 14:36:49.524566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.524587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.045 [2024-07-15 14:36:49.524598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.524615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.524629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.524638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.524647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.045 [2024-07-15 14:36:49.524661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.045 [2024-07-15 14:36:49.525989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.045 [2024-07-15 14:36:49.526075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.526095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.045 [2024-07-15 14:36:49.526106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.526122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.526136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.526145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.526154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.045 [2024-07-15 14:36:49.526171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.045 [2024-07-15 14:36:49.534525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.045 [2024-07-15 14:36:49.534618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.534639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.045 [2024-07-15 14:36:49.534650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.534667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.534681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.534689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.534713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.045 [2024-07-15 14:36:49.534730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.045 [2024-07-15 14:36:49.536045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.045 [2024-07-15 14:36:49.536130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.536150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.045 [2024-07-15 14:36:49.536160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.536177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.536199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.536207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.536216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.045 [2024-07-15 14:36:49.536231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.045 [2024-07-15 14:36:49.544585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.045 [2024-07-15 14:36:49.544684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.544717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.045 [2024-07-15 14:36:49.544729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.544745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.544760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.544769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.544778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.045 [2024-07-15 14:36:49.544792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.045 [2024-07-15 14:36:49.546098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.045 [2024-07-15 14:36:49.546184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.546205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.045 [2024-07-15 14:36:49.546216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.546232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.546247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.546255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.546264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.045 [2024-07-15 14:36:49.546279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.045 [2024-07-15 14:36:49.554646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.045 [2024-07-15 14:36:49.554744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.554773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.045 [2024-07-15 14:36:49.554783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.554800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.554814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.554823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.554832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.045 [2024-07-15 14:36:49.554847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.045 [2024-07-15 14:36:49.556153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.045 [2024-07-15 14:36:49.556237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.045 [2024-07-15 14:36:49.556258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.045 [2024-07-15 14:36:49.556268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.045 [2024-07-15 14:36:49.556284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.045 [2024-07-15 14:36:49.556298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.045 [2024-07-15 14:36:49.556307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.045 [2024-07-15 14:36:49.556315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.045 [2024-07-15 14:36:49.556334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.045 [2024-07-15 14:36:49.564704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.045 [2024-07-15 14:36:49.564796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.564817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.046 [2024-07-15 14:36:49.564827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.564844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.564858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.564866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.564875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.046 [2024-07-15 14:36:49.564890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.046 [2024-07-15 14:36:49.566207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.046 [2024-07-15 14:36:49.566307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.566339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.046 [2024-07-15 14:36:49.566352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.566368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.566382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.566391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.566400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.046 [2024-07-15 14:36:49.566414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.046 [2024-07-15 14:36:49.574766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.046 [2024-07-15 14:36:49.574851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.574872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.046 [2024-07-15 14:36:49.574883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.574899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.574914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.574922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.574931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.046 [2024-07-15 14:36:49.574946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.046 [2024-07-15 14:36:49.576277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.046 [2024-07-15 14:36:49.576364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.576384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.046 [2024-07-15 14:36:49.576395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.576411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.576425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.576434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.576443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.046 [2024-07-15 14:36:49.576458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.046 [2024-07-15 14:36:49.584821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.046 [2024-07-15 14:36:49.584923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.584945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.046 [2024-07-15 14:36:49.584955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.584971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.584986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.584994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.585003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.046 [2024-07-15 14:36:49.585017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.046 [2024-07-15 14:36:49.586358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.046 [2024-07-15 14:36:49.586448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.586469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.046 [2024-07-15 14:36:49.586480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.586496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.586510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.586519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.586528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.046 [2024-07-15 14:36:49.586542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.046 [2024-07-15 14:36:49.594876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.046 [2024-07-15 14:36:49.594977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.594998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.046 [2024-07-15 14:36:49.595009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.595025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.595039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.595048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.595057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.046 [2024-07-15 14:36:49.595071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.046 [2024-07-15 14:36:49.596420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.046 [2024-07-15 14:36:49.596511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.596531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.046 [2024-07-15 14:36:49.596542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.596558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.596572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.596581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.596590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.046 [2024-07-15 14:36:49.596604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.046 [2024-07-15 14:36:49.604947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.046 [2024-07-15 14:36:49.605065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.046 [2024-07-15 14:36:49.605086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.046 [2024-07-15 14:36:49.605096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.046 [2024-07-15 14:36:49.605112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.046 [2024-07-15 14:36:49.605126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.046 [2024-07-15 14:36:49.605135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.046 [2024-07-15 14:36:49.605144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.046 [2024-07-15 14:36:49.605158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.046 [2024-07-15 14:36:49.606476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.046 [2024-07-15 14:36:49.606561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.047 [2024-07-15 14:36:49.606581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.047 [2024-07-15 14:36:49.606592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.047 [2024-07-15 14:36:49.606608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.047 [2024-07-15 14:36:49.606622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.047 [2024-07-15 14:36:49.606631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.047 [2024-07-15 14:36:49.606640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.047 [2024-07-15 14:36:49.606654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.047 [2024-07-15 14:36:49.615019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.047 [2024-07-15 14:36:49.615121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.047 [2024-07-15 14:36:49.615146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.047 [2024-07-15 14:36:49.615157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.047 [2024-07-15 14:36:49.615175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.047 [2024-07-15 14:36:49.615190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.047 [2024-07-15 14:36:49.615199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.047 [2024-07-15 14:36:49.615208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.047 [2024-07-15 14:36:49.615223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.047 [2024-07-15 14:36:49.616531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.047 [2024-07-15 14:36:49.616616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.047 [2024-07-15 14:36:49.616636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.047 [2024-07-15 14:36:49.616647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.047 [2024-07-15 14:36:49.616664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.047 [2024-07-15 14:36:49.616679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.047 [2024-07-15 14:36:49.616688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.047 [2024-07-15 14:36:49.616709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.047 [2024-07-15 14:36:49.616726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.047 [2024-07-15 14:36:49.625077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.047 [2024-07-15 14:36:49.625231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.047 [2024-07-15 14:36:49.625254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.047 [2024-07-15 14:36:49.625265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.047 [2024-07-15 14:36:49.625282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.047 [2024-07-15 14:36:49.625296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.047 [2024-07-15 14:36:49.625305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.047 [2024-07-15 14:36:49.625314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.047 [2024-07-15 14:36:49.625329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.047 [2024-07-15 14:36:49.626588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.047 [2024-07-15 14:36:49.626676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.047 [2024-07-15 14:36:49.626714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.047 [2024-07-15 14:36:49.626727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.047 [2024-07-15 14:36:49.626745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.047 [2024-07-15 14:36:49.626759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.047 [2024-07-15 14:36:49.626768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.047 [2024-07-15 14:36:49.626778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.047 [2024-07-15 14:36:49.626792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.308 [2024-07-15 14:36:49.635183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.308 [2024-07-15 14:36:49.635276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.308 [2024-07-15 14:36:49.635298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.308 [2024-07-15 14:36:49.635308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.308 [2024-07-15 14:36:49.635335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.308 [2024-07-15 14:36:49.635351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.308 [2024-07-15 14:36:49.635360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.308 [2024-07-15 14:36:49.635369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.308 [2024-07-15 14:36:49.635384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.308 [2024-07-15 14:36:49.636643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.308 [2024-07-15 14:36:49.636738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.308 [2024-07-15 14:36:49.636759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.308 [2024-07-15 14:36:49.636770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.308 [2024-07-15 14:36:49.636787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.308 [2024-07-15 14:36:49.636801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.308 [2024-07-15 14:36:49.636810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.308 [2024-07-15 14:36:49.636819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.308 [2024-07-15 14:36:49.636834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.308 [2024-07-15 14:36:49.645245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.308 [2024-07-15 14:36:49.645331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.308 [2024-07-15 14:36:49.645352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.308 [2024-07-15 14:36:49.645362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.645378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.645393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.645402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.645411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.309 [2024-07-15 14:36:49.645426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.309 [2024-07-15 14:36:49.646706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.309 [2024-07-15 14:36:49.646790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.646811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.309 [2024-07-15 14:36:49.646821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.646837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.646852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.646863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.646872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.309 [2024-07-15 14:36:49.646886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.309 [2024-07-15 14:36:49.655301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.309 [2024-07-15 14:36:49.655387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.655407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.309 [2024-07-15 14:36:49.655418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.655435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.655450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.655458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.655467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.309 [2024-07-15 14:36:49.655482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.309 [2024-07-15 14:36:49.656760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.309 [2024-07-15 14:36:49.656843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.656864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.309 [2024-07-15 14:36:49.656874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.656890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.656905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.656913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.656923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.309 [2024-07-15 14:36:49.656937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.309 [2024-07-15 14:36:49.665356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.309 [2024-07-15 14:36:49.665441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.665462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.309 [2024-07-15 14:36:49.665472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.665489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.665503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.665512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.665521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.309 [2024-07-15 14:36:49.665535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.309 [2024-07-15 14:36:49.666814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.309 [2024-07-15 14:36:49.666898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.666918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.309 [2024-07-15 14:36:49.666929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.666945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.666959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.666968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.666977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.309 [2024-07-15 14:36:49.666991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.309 [2024-07-15 14:36:49.675412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.309 [2024-07-15 14:36:49.675513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.675533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.309 [2024-07-15 14:36:49.675544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.675560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.675575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.675584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.675593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.309 [2024-07-15 14:36:49.675607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.309 [2024-07-15 14:36:49.676867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.309 [2024-07-15 14:36:49.676966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.676986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.309 [2024-07-15 14:36:49.676997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.677012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.677027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.677036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.677045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.309 [2024-07-15 14:36:49.677059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.309 [2024-07-15 14:36:49.685497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.309 [2024-07-15 14:36:49.685596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.685617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.309 [2024-07-15 14:36:49.685628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.685644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.685658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.685667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.685675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.309 [2024-07-15 14:36:49.685690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.309 [2024-07-15 14:36:49.686936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.309 [2024-07-15 14:36:49.687036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.687057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.309 [2024-07-15 14:36:49.687068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.309 [2024-07-15 14:36:49.687084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.309 [2024-07-15 14:36:49.687099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.309 [2024-07-15 14:36:49.687108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.309 [2024-07-15 14:36:49.687117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.309 [2024-07-15 14:36:49.687131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.309 [2024-07-15 14:36:49.695558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.695612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d1410 with addr=10.0.0.3, port=8009 00:20:10.309 [2024-07-15 14:36:49.695646] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:10.309 [2024-07-15 14:36:49.695655] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:10.309 [2024-07-15 14:36:49.695665] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.3:8009] could not start discovery connect 00:20:10.309 [2024-07-15 14:36:49.695763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.309 [2024-07-15 14:36:49.695784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d1410 with addr=10.0.0.2, port=8009 00:20:10.309 [2024-07-15 14:36:49.695797] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:10.309 [2024-07-15 14:36:49.695806] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:10.309 [2024-07-15 14:36:49.695814] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8009] could not start discovery connect 00:20:10.309 [2024-07-15 14:36:49.695835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.309 [2024-07-15 14:36:49.695910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.695936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.310 [2024-07-15 14:36:49.695949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.695973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.695996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 
00:20:10.310 [2024-07-15 14:36:49.696006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.696016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.310 [2024-07-15 14:36:49.696038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.310 [2024-07-15 14:36:49.697003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.310 [2024-07-15 14:36:49.697082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.697102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.310 [2024-07-15 14:36:49.697112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.697129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.697144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.697153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.697162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.310 [2024-07-15 14:36:49.697176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.310 [2024-07-15 14:36:49.705873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.310 [2024-07-15 14:36:49.705959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.705980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.310 [2024-07-15 14:36:49.705991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.706007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.706021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.706030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.706039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.310 [2024-07-15 14:36:49.706053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.310 [2024-07-15 14:36:49.707052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.310 [2024-07-15 14:36:49.707158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.707179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.310 [2024-07-15 14:36:49.707189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.707206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.707220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.707229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.707238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.310 [2024-07-15 14:36:49.707252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.310 [2024-07-15 14:36:49.715934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.310 [2024-07-15 14:36:49.716031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.716053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.310 [2024-07-15 14:36:49.716064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.716081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.716097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.716115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.716124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.310 [2024-07-15 14:36:49.716139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.310 [2024-07-15 14:36:49.717129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.310 [2024-07-15 14:36:49.717215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.717246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.310 [2024-07-15 14:36:49.717257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.717274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.717288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.717297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.717306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.310 [2024-07-15 14:36:49.717321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.310 [2024-07-15 14:36:49.725996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.310 [2024-07-15 14:36:49.726085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.726106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.310 [2024-07-15 14:36:49.726117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.726133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.726147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.726156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.726165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.310 [2024-07-15 14:36:49.726180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.310 [2024-07-15 14:36:49.727187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.310 [2024-07-15 14:36:49.727273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.727293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.310 [2024-07-15 14:36:49.727304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.727321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.727346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.727357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.727367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.310 [2024-07-15 14:36:49.727382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.310 [2024-07-15 14:36:49.736070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.310 [2024-07-15 14:36:49.736158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.736180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.310 [2024-07-15 14:36:49.736191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.736207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.736221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.736230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.736239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.310 [2024-07-15 14:36:49.736254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.310 [2024-07-15 14:36:49.737243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.310 [2024-07-15 14:36:49.737328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.737348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.310 [2024-07-15 14:36:49.737360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.737376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.737390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.737399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.737408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.310 [2024-07-15 14:36:49.737423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.310 [2024-07-15 14:36:49.746127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.310 [2024-07-15 14:36:49.746230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.310 [2024-07-15 14:36:49.746251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.310 [2024-07-15 14:36:49.746261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.310 [2024-07-15 14:36:49.746278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.310 [2024-07-15 14:36:49.746293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.310 [2024-07-15 14:36:49.746301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.310 [2024-07-15 14:36:49.746311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.310 [2024-07-15 14:36:49.746336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.310 [2024-07-15 14:36:49.747297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.311 [2024-07-15 14:36:49.747396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.747416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.311 [2024-07-15 14:36:49.747426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.747451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.747467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.747476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.747485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.311 [2024-07-15 14:36:49.747500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.311 [2024-07-15 14:36:49.756200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.311 [2024-07-15 14:36:49.756301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.756322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.311 [2024-07-15 14:36:49.756333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.756349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.756364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.756373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.756381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.311 [2024-07-15 14:36:49.756396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.311 [2024-07-15 14:36:49.757365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.311 [2024-07-15 14:36:49.757449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.757469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.311 [2024-07-15 14:36:49.757479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.757495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.757510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.757528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.757537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.311 [2024-07-15 14:36:49.757552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.311 [2024-07-15 14:36:49.766273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.311 [2024-07-15 14:36:49.766366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.766387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.311 [2024-07-15 14:36:49.766398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.766414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.766429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.766438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.766447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.311 [2024-07-15 14:36:49.766461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.311 [2024-07-15 14:36:49.767429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.311 [2024-07-15 14:36:49.767529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.767548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.311 [2024-07-15 14:36:49.767559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.767575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.767589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.767598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.767607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.311 [2024-07-15 14:36:49.767621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.311 [2024-07-15 14:36:49.776330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.311 [2024-07-15 14:36:49.776419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.776440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.311 [2024-07-15 14:36:49.776452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.776468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.776482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.776491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.776500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.311 [2024-07-15 14:36:49.776515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.311 [2024-07-15 14:36:49.777499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.311 [2024-07-15 14:36:49.777585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.777606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.311 [2024-07-15 14:36:49.777616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.777633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.777647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.777656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.777665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.311 [2024-07-15 14:36:49.777679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.311 [2024-07-15 14:36:49.786387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.311 [2024-07-15 14:36:49.786472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.786492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.311 [2024-07-15 14:36:49.786503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.786519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.786534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.786542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.786551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.311 [2024-07-15 14:36:49.786565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.311 [2024-07-15 14:36:49.787554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.311 [2024-07-15 14:36:49.787639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.787659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.311 [2024-07-15 14:36:49.787670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.787686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.787714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.787725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.787735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.311 [2024-07-15 14:36:49.787750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.311 [2024-07-15 14:36:49.796442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.311 [2024-07-15 14:36:49.796527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.796547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.311 [2024-07-15 14:36:49.796558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.796574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.796588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.796597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.796605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.311 [2024-07-15 14:36:49.796620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.311 [2024-07-15 14:36:49.797609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.311 [2024-07-15 14:36:49.797694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.311 [2024-07-15 14:36:49.797728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.311 [2024-07-15 14:36:49.797738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.311 [2024-07-15 14:36:49.797755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.311 [2024-07-15 14:36:49.797769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.311 [2024-07-15 14:36:49.797778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.311 [2024-07-15 14:36:49.797787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.311 [2024-07-15 14:36:49.797801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.311 [2024-07-15 14:36:49.806507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.312 [2024-07-15 14:36:49.806593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.806613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.312 [2024-07-15 14:36:49.806624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.806640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.806655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.806663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.806672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.312 [2024-07-15 14:36:49.806687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.312 [2024-07-15 14:36:49.807664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.312 [2024-07-15 14:36:49.807762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.807783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.312 [2024-07-15 14:36:49.807794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.807810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.807824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.807833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.807842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.312 [2024-07-15 14:36:49.807857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.312 [2024-07-15 14:36:49.816564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.312 [2024-07-15 14:36:49.816649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.816669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.312 [2024-07-15 14:36:49.816679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.816708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.816725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.816734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.816743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.312 [2024-07-15 14:36:49.816758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.312 [2024-07-15 14:36:49.817731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.312 [2024-07-15 14:36:49.817815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.817836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.312 [2024-07-15 14:36:49.817846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.817862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.817876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.817885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.817894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.312 [2024-07-15 14:36:49.817909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.312 [2024-07-15 14:36:49.826620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.312 [2024-07-15 14:36:49.826725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.826750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.312 [2024-07-15 14:36:49.826767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.826794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.826815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.826824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.826833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.312 [2024-07-15 14:36:49.826848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.312 [2024-07-15 14:36:49.827788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.312 [2024-07-15 14:36:49.827869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.827890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.312 [2024-07-15 14:36:49.827901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.827917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.827931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.827940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.827949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.312 [2024-07-15 14:36:49.827964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.312 [2024-07-15 14:36:49.836681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.312 [2024-07-15 14:36:49.836778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.836806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.312 [2024-07-15 14:36:49.836817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.836834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.836848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.836857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.836866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.312 [2024-07-15 14:36:49.836881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.312 [2024-07-15 14:36:49.837840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.312 [2024-07-15 14:36:49.837924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.837944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.312 [2024-07-15 14:36:49.837954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.837971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.837985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.837993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.838003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.312 [2024-07-15 14:36:49.838017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.312 [2024-07-15 14:36:49.846750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.312 [2024-07-15 14:36:49.846836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.846857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.312 [2024-07-15 14:36:49.846868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.846884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.846898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.846907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.846916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.312 [2024-07-15 14:36:49.846930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.312 [2024-07-15 14:36:49.847893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.312 [2024-07-15 14:36:49.847981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.312 [2024-07-15 14:36:49.848001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.312 [2024-07-15 14:36:49.848012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.312 [2024-07-15 14:36:49.848028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.312 [2024-07-15 14:36:49.848042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.312 [2024-07-15 14:36:49.848051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.312 [2024-07-15 14:36:49.848060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.313 [2024-07-15 14:36:49.848075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.313 [2024-07-15 14:36:49.856807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.313 [2024-07-15 14:36:49.856893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.856920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.313 [2024-07-15 14:36:49.856931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.856948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.856962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.856971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.856980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.313 [2024-07-15 14:36:49.856994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.313 [2024-07-15 14:36:49.857949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.313 [2024-07-15 14:36:49.858033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.858053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.313 [2024-07-15 14:36:49.858064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.858080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.858094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.858103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.858112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.313 [2024-07-15 14:36:49.858126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.313 [2024-07-15 14:36:49.866862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.313 [2024-07-15 14:36:49.866950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.866971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.313 [2024-07-15 14:36:49.866981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.866998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.867013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.867021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.867031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.313 [2024-07-15 14:36:49.867046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.313 [2024-07-15 14:36:49.868005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.313 [2024-07-15 14:36:49.868091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.868111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.313 [2024-07-15 14:36:49.868122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.868139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.868153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.868162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.868171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.313 [2024-07-15 14:36:49.868185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.313 [2024-07-15 14:36:49.876921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.313 [2024-07-15 14:36:49.877030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.877052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.313 [2024-07-15 14:36:49.877063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.877080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.877095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.877103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.877112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.313 [2024-07-15 14:36:49.877127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.313 [2024-07-15 14:36:49.878062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.313 [2024-07-15 14:36:49.878150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.878170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.313 [2024-07-15 14:36:49.878181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.878197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.878211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.878221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.878230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.313 [2024-07-15 14:36:49.878244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.313 [2024-07-15 14:36:49.886995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.313 [2024-07-15 14:36:49.887096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.887116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.313 [2024-07-15 14:36:49.887127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.887143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.887158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.887167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.887175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.313 [2024-07-15 14:36:49.887190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.313 [2024-07-15 14:36:49.888120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.313 [2024-07-15 14:36:49.888207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.888228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.313 [2024-07-15 14:36:49.888238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.888254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.888269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.888278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.888287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.313 [2024-07-15 14:36:49.888301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.313 [2024-07-15 14:36:49.897066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.313 [2024-07-15 14:36:49.897152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.897172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.313 [2024-07-15 14:36:49.897183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.897199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.897213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.897222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.897231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.313 [2024-07-15 14:36:49.897245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.313 [2024-07-15 14:36:49.898177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.313 [2024-07-15 14:36:49.898264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.313 [2024-07-15 14:36:49.898284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.313 [2024-07-15 14:36:49.898296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.313 [2024-07-15 14:36:49.898312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.313 [2024-07-15 14:36:49.898337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.313 [2024-07-15 14:36:49.898348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.313 [2024-07-15 14:36:49.898358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.313 [2024-07-15 14:36:49.898372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.575 [2024-07-15 14:36:49.907121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.575 [2024-07-15 14:36:49.907206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.907226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.575 [2024-07-15 14:36:49.907236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.907252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.907267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.907275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.907284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.575 [2024-07-15 14:36:49.907299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.575 [2024-07-15 14:36:49.908234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.575 [2024-07-15 14:36:49.908322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.908342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.575 [2024-07-15 14:36:49.908352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.908369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.908383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.908392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.908401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.575 [2024-07-15 14:36:49.908415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.575 [2024-07-15 14:36:49.917176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.575 [2024-07-15 14:36:49.917262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.917282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.575 [2024-07-15 14:36:49.917292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.917308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.917323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.917331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.917340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.575 [2024-07-15 14:36:49.917355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.575 [2024-07-15 14:36:49.918291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.575 [2024-07-15 14:36:49.918387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.918408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.575 [2024-07-15 14:36:49.918419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.918437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.918451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.918460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.918469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.575 [2024-07-15 14:36:49.918484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.575 [2024-07-15 14:36:49.927232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.575 [2024-07-15 14:36:49.927317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.927337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.575 [2024-07-15 14:36:49.927348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.927364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.927378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.927387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.927395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.575 [2024-07-15 14:36:49.927410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.575 [2024-07-15 14:36:49.928359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.575 [2024-07-15 14:36:49.928439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.928465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.575 [2024-07-15 14:36:49.928476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.928493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.928507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.928515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.928524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.575 [2024-07-15 14:36:49.928539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.575 [2024-07-15 14:36:49.937288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.575 [2024-07-15 14:36:49.937418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.937440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.575 [2024-07-15 14:36:49.937451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.937467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.937482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.937491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.937499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.575 [2024-07-15 14:36:49.937514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.575 [2024-07-15 14:36:49.938410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.575 [2024-07-15 14:36:49.938495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.938515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.575 [2024-07-15 14:36:49.938526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.938542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.938557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.938566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.938575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.575 [2024-07-15 14:36:49.938589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.575 [2024-07-15 14:36:49.947380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.575 [2024-07-15 14:36:49.947499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.947520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.575 [2024-07-15 14:36:49.947530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.947557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.947573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.947582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.947591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.575 [2024-07-15 14:36:49.947606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.575 [2024-07-15 14:36:49.948463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.575 [2024-07-15 14:36:49.948556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.575 [2024-07-15 14:36:49.948576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.575 [2024-07-15 14:36:49.948587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.575 [2024-07-15 14:36:49.948603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.575 [2024-07-15 14:36:49.948617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.575 [2024-07-15 14:36:49.948626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.575 [2024-07-15 14:36:49.948635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.575 [2024-07-15 14:36:49.948649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.575 [2024-07-15 14:36:49.957452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.575 [2024-07-15 14:36:49.957552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.957573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.576 [2024-07-15 14:36:49.957583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.957600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.957614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.957623] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.957631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.576 [2024-07-15 14:36:49.957646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.576 [2024-07-15 14:36:49.958525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.576 [2024-07-15 14:36:49.958605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.958628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.576 [2024-07-15 14:36:49.958639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.958655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.958669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.958678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.958687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.576 [2024-07-15 14:36:49.958717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.576 [2024-07-15 14:36:49.967522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.576 [2024-07-15 14:36:49.967624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.967644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.576 [2024-07-15 14:36:49.967655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.967671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.967686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.967721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.967748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.576 [2024-07-15 14:36:49.967763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.576 [2024-07-15 14:36:49.968575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.576 [2024-07-15 14:36:49.968655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.968675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.576 [2024-07-15 14:36:49.968686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.968721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.968738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.968747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.968757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.576 [2024-07-15 14:36:49.968772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.576 [2024-07-15 14:36:49.977592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.576 [2024-07-15 14:36:49.977693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.977743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.576 [2024-07-15 14:36:49.977754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.977771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.977785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.977793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.977801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.576 [2024-07-15 14:36:49.977816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.576 [2024-07-15 14:36:49.978629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.576 [2024-07-15 14:36:49.978720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.978742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.576 [2024-07-15 14:36:49.978753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.978770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.978784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.978793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.978802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.576 [2024-07-15 14:36:49.978817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.576 [2024-07-15 14:36:49.987661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.576 [2024-07-15 14:36:49.987797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.987818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.576 [2024-07-15 14:36:49.987828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.987844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.987858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.987866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.987875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.576 [2024-07-15 14:36:49.987889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.576 [2024-07-15 14:36:49.988682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.576 [2024-07-15 14:36:49.988816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.988836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.576 [2024-07-15 14:36:49.988846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.988862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.988875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.988884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.988893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.576 [2024-07-15 14:36:49.988907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.576 [2024-07-15 14:36:49.997763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.576 [2024-07-15 14:36:49.997866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.997886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.576 [2024-07-15 14:36:49.997897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.997913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.997927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.997936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.997945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.576 [2024-07-15 14:36:49.997959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.576 [2024-07-15 14:36:49.998802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.576 [2024-07-15 14:36:49.998897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:49.998918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.576 [2024-07-15 14:36:49.998928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:49.998944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:49.998959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:49.998968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:49.998977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.576 [2024-07-15 14:36:49.998991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.576 [2024-07-15 14:36:50.007822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.576 [2024-07-15 14:36:50.007907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.576 [2024-07-15 14:36:50.007928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.576 [2024-07-15 14:36:50.007938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.576 [2024-07-15 14:36:50.007955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.576 [2024-07-15 14:36:50.007969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.576 [2024-07-15 14:36:50.007978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.576 [2024-07-15 14:36:50.007986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.576 [2024-07-15 14:36:50.008001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.576 [2024-07-15 14:36:50.008867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.576 [2024-07-15 14:36:50.008954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.008974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.577 [2024-07-15 14:36:50.008984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.009001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.009015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.009024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.009034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.577 [2024-07-15 14:36:50.009048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.577 [2024-07-15 14:36:50.017878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.577 [2024-07-15 14:36:50.017965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.017986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.577 [2024-07-15 14:36:50.017996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.018012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.018027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.018035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.018044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.577 [2024-07-15 14:36:50.018059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.577 [2024-07-15 14:36:50.018923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.577 [2024-07-15 14:36:50.019019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.019039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.577 [2024-07-15 14:36:50.019050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.019066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.019080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.019089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.019098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.577 [2024-07-15 14:36:50.019112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.577 [2024-07-15 14:36:50.027935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.577 [2024-07-15 14:36:50.028034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.028054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.577 [2024-07-15 14:36:50.028065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.028081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.028095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.028104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.028113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.577 [2024-07-15 14:36:50.028127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.577 [2024-07-15 14:36:50.028991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.577 [2024-07-15 14:36:50.029072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.029092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.577 [2024-07-15 14:36:50.029102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.029119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.029133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.029142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.029151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.577 [2024-07-15 14:36:50.029165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.577 [2024-07-15 14:36:50.038006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.577 [2024-07-15 14:36:50.038097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.038118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.577 [2024-07-15 14:36:50.038128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.038144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.038159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.038167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.038177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.577 [2024-07-15 14:36:50.038191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.577 [2024-07-15 14:36:50.039041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.577 [2024-07-15 14:36:50.039130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.039150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.577 [2024-07-15 14:36:50.039161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.039177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.039192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.039200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.039209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.577 [2024-07-15 14:36:50.039224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.577 [2024-07-15 14:36:50.048064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.577 [2024-07-15 14:36:50.048178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.048198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.577 [2024-07-15 14:36:50.048209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.048224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.048238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.048246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.048255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.577 [2024-07-15 14:36:50.048269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.577 [2024-07-15 14:36:50.049099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.577 [2024-07-15 14:36:50.049187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.049207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.577 [2024-07-15 14:36:50.049219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.049235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.049249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.049258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.049267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.577 [2024-07-15 14:36:50.049282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.577 [2024-07-15 14:36:50.058133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.577 [2024-07-15 14:36:50.058219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.058239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.577 [2024-07-15 14:36:50.058250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.058266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.058280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.058289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.058298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.577 [2024-07-15 14:36:50.058312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.577 [2024-07-15 14:36:50.059155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.577 [2024-07-15 14:36:50.059233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.577 [2024-07-15 14:36:50.059253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.577 [2024-07-15 14:36:50.059263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.577 [2024-07-15 14:36:50.059279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.577 [2024-07-15 14:36:50.059293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.577 [2024-07-15 14:36:50.059302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.577 [2024-07-15 14:36:50.059311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.577 [2024-07-15 14:36:50.059326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.577 [2024-07-15 14:36:50.068188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.577 [2024-07-15 14:36:50.068274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.068294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.578 [2024-07-15 14:36:50.068305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.068321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.068336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.068344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.068353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.578 [2024-07-15 14:36:50.068367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.578 [2024-07-15 14:36:50.069204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.578 [2024-07-15 14:36:50.069284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.069305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.578 [2024-07-15 14:36:50.069315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.069332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.069346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.069355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.069364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.578 [2024-07-15 14:36:50.069378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.578 [2024-07-15 14:36:50.078245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.578 [2024-07-15 14:36:50.078339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.078368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.578 [2024-07-15 14:36:50.078385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.078403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.078418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.078427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.078436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.578 [2024-07-15 14:36:50.078451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.578 [2024-07-15 14:36:50.079255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.578 [2024-07-15 14:36:50.079335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.079355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.578 [2024-07-15 14:36:50.079365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.079381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.079396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.079404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.079413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.578 [2024-07-15 14:36:50.079428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.578 [2024-07-15 14:36:50.088316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.578 [2024-07-15 14:36:50.088414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.088435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.578 [2024-07-15 14:36:50.088445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.088461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.088476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.088485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.088494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.578 [2024-07-15 14:36:50.088508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.578 [2024-07-15 14:36:50.089305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.578 [2024-07-15 14:36:50.089384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.089404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.578 [2024-07-15 14:36:50.089415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.089430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.089445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.089453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.089463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.578 [2024-07-15 14:36:50.089477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.578 [2024-07-15 14:36:50.098385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.578 [2024-07-15 14:36:50.098471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.098491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.578 [2024-07-15 14:36:50.098502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.098518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.098533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.098541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.098550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.578 [2024-07-15 14:36:50.098565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.578 [2024-07-15 14:36:50.099355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.578 [2024-07-15 14:36:50.099433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.099453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.578 [2024-07-15 14:36:50.099463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.099479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.099493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.099502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.099511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.578 [2024-07-15 14:36:50.099526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.578 [2024-07-15 14:36:50.108442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.578 [2024-07-15 14:36:50.108529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.108550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.578 [2024-07-15 14:36:50.108560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.108577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.108591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.108600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.108609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.578 [2024-07-15 14:36:50.108623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.578 [2024-07-15 14:36:50.109403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.578 [2024-07-15 14:36:50.109485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.109505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.578 [2024-07-15 14:36:50.109516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.109532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.109546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.109554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.109563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.578 [2024-07-15 14:36:50.109577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.578 [2024-07-15 14:36:50.118500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.578 [2024-07-15 14:36:50.118586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.578 [2024-07-15 14:36:50.118606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.578 [2024-07-15 14:36:50.118617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.578 [2024-07-15 14:36:50.118633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.578 [2024-07-15 14:36:50.118647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.578 [2024-07-15 14:36:50.118656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.578 [2024-07-15 14:36:50.118665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.578 [2024-07-15 14:36:50.118679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.578 [2024-07-15 14:36:50.119456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.578 [2024-07-15 14:36:50.119541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.119561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.579 [2024-07-15 14:36:50.119571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.119587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.119611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.119621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.119631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.579 [2024-07-15 14:36:50.119646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.579 [2024-07-15 14:36:50.128557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.579 [2024-07-15 14:36:50.128656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.128677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.579 [2024-07-15 14:36:50.128688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.128716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.128732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.128741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.128750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.579 [2024-07-15 14:36:50.128765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.579 [2024-07-15 14:36:50.129513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.579 [2024-07-15 14:36:50.129593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.129613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.579 [2024-07-15 14:36:50.129623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.129639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.129654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.129662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.129671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.579 [2024-07-15 14:36:50.129685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.579 [2024-07-15 14:36:50.138628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.579 [2024-07-15 14:36:50.138728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.138764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.579 [2024-07-15 14:36:50.138774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.138791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.138805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.138813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.138822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.579 [2024-07-15 14:36:50.138836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.579 [2024-07-15 14:36:50.139564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.579 [2024-07-15 14:36:50.139643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.139663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.579 [2024-07-15 14:36:50.139673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.139713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.139731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.139740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.139749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.579 [2024-07-15 14:36:50.139764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.579 [2024-07-15 14:36:50.148694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.579 [2024-07-15 14:36:50.148840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.148862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.579 [2024-07-15 14:36:50.148873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.148889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.148903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.148913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.148922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.579 [2024-07-15 14:36:50.148936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.579 [2024-07-15 14:36:50.149613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.579 [2024-07-15 14:36:50.149692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.149724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.579 [2024-07-15 14:36:50.149735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.149751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.149765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.149774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.149783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.579 [2024-07-15 14:36:50.149798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.579 [2024-07-15 14:36:50.158790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.579 [2024-07-15 14:36:50.158884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.158906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.579 [2024-07-15 14:36:50.158916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.158933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.158947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.158956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.158965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.579 [2024-07-15 14:36:50.158979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.579 [2024-07-15 14:36:50.159662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.579 [2024-07-15 14:36:50.159753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.579 [2024-07-15 14:36:50.159774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.579 [2024-07-15 14:36:50.159784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.579 [2024-07-15 14:36:50.159800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.579 [2024-07-15 14:36:50.159814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.579 [2024-07-15 14:36:50.159823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.579 [2024-07-15 14:36:50.159831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.579 [2024-07-15 14:36:50.159846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.876 [2024-07-15 14:36:50.168857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.876 [2024-07-15 14:36:50.168961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.168983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.876 [2024-07-15 14:36:50.168995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.169012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.169027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.169036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.169045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.876 [2024-07-15 14:36:50.169060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.876 [2024-07-15 14:36:50.169727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.876 [2024-07-15 14:36:50.169817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.169839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.876 [2024-07-15 14:36:50.169849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.169866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.169880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.169889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.169898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.876 [2024-07-15 14:36:50.169913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.876 [2024-07-15 14:36:50.178923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.876 [2024-07-15 14:36:50.179011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.179032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.876 [2024-07-15 14:36:50.179043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.179059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.179073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.179082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.179091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.876 [2024-07-15 14:36:50.179105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.876 [2024-07-15 14:36:50.179786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.876 [2024-07-15 14:36:50.179866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.179890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.876 [2024-07-15 14:36:50.179906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.179926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.179940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.179949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.179958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.876 [2024-07-15 14:36:50.179973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.876 [2024-07-15 14:36:50.188981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.876 [2024-07-15 14:36:50.189068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.189089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.876 [2024-07-15 14:36:50.189099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.189115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.189129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.189138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.189147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.876 [2024-07-15 14:36:50.189161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.876 [2024-07-15 14:36:50.189837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.876 [2024-07-15 14:36:50.189916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.189937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.876 [2024-07-15 14:36:50.189947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.189963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.189977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.189986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.189995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.876 [2024-07-15 14:36:50.190009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.876 [2024-07-15 14:36:50.199038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.876 [2024-07-15 14:36:50.199125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.199146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.876 [2024-07-15 14:36:50.199157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.199173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.199188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.199196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.199205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.876 [2024-07-15 14:36:50.199220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.876 [2024-07-15 14:36:50.199892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.876 [2024-07-15 14:36:50.199974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.199994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.876 [2024-07-15 14:36:50.200007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.200031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.200047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.200056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.200065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.876 [2024-07-15 14:36:50.200080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.876 [2024-07-15 14:36:50.209096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.876 [2024-07-15 14:36:50.209182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.209203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.876 [2024-07-15 14:36:50.209213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.209229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.209243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.209252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.209261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.876 [2024-07-15 14:36:50.209276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.876 [2024-07-15 14:36:50.209946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.876 [2024-07-15 14:36:50.210024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.210045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.876 [2024-07-15 14:36:50.210055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.210071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.210085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.210094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.210103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.876 [2024-07-15 14:36:50.210118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.876 [2024-07-15 14:36:50.219153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.876 [2024-07-15 14:36:50.219240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.219261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.876 [2024-07-15 14:36:50.219271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.876 [2024-07-15 14:36:50.219288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.876 [2024-07-15 14:36:50.219302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.876 [2024-07-15 14:36:50.219310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.876 [2024-07-15 14:36:50.219319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.876 [2024-07-15 14:36:50.219334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.876 [2024-07-15 14:36:50.219994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.876 [2024-07-15 14:36:50.220085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.876 [2024-07-15 14:36:50.220107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.220119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.220135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.220149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.220163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.220176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.220191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.229209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.229294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.229315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.229326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.229342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.229356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.229365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.229374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.229388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.230055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.230135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.230155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.230165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.230181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.230195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.230204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.230213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.230227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.239267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.239357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.239379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.239389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.239406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.239420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.239429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.239437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.239452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.240106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.240188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.240209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.240219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.240235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.240250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.240259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.240268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.240283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.249326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.249441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.249463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.249474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.249491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.249506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.249515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.249524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.249539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.250156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.250243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.250268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.250279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.250296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.250311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.250319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.250341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.250357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.259401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.259488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.259509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.259520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.259536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.259551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.259559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.259568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.259583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.260212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.260295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.260315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.260328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.260352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.260369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.260377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.260386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.260400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.269457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.269544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.269565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.269576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.269592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.269606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.269615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.269624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.269638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.270269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.270369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.270392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.270402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.270424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.270446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.270455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.270464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.270479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.279514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.279604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.279625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.279637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.279653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.279678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.279688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.279711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.279728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.280330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.280407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.280432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.280449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.280467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.280481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.280489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.280498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.280513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.289570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.289655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.289676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.289687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.289715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.289738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.289747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.289756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.877 [2024-07-15 14:36:50.289771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.877 [2024-07-15 14:36:50.290378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.877 [2024-07-15 14:36:50.290457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.290478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.877 [2024-07-15 14:36:50.290488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.290504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.290519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.290528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.877 [2024-07-15 14:36:50.290537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.877 [2024-07-15 14:36:50.290551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.877 [2024-07-15 14:36:50.299627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.877 [2024-07-15 14:36:50.299724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.877 [2024-07-15 14:36:50.299745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.877 [2024-07-15 14:36:50.299756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.877 [2024-07-15 14:36:50.299782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.877 [2024-07-15 14:36:50.299798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.877 [2024-07-15 14:36:50.299807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.299816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.299831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.300428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.300527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.300556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.300571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.300588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.300602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.300611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.300620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.300635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.309684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.309777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.309798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.309808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.309825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.309840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.309848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.309857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.309872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.310479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.310573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.310610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.310622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.310639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.310653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.310662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.310671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.310688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.319748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.319833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.319854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.319865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.319881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.319896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.319905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.319913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.319928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.320533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.320610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.320631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.320647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.320668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.320683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.320692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.320716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.320732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.329804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.329887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.329908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.329919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.329936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.329950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.329959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.329967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.329982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.330587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.330678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.330711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.330723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.330740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.330759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.330775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.330785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.330800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.339859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.339949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.339970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.339981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.339997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.340012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.340021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.340030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.340044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.340643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.340742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.340764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.340775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.340792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.340806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.340815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.340824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.340838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.349918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.350004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.350024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.350034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.350051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.350065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.350074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.350083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.350098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.350693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.350783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.350803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.350814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.350831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.350845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.350854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.350863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.350877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.359976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.360072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.360093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.360104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.360121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.360135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.360144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.360153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.360168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.360754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.360832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.360852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.360862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.360878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.360892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.360901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.360910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.360925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.370039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.370139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.370160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.878 [2024-07-15 14:36:50.370170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.370187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.370202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.370210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.370219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.878 [2024-07-15 14:36:50.370234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.878 [2024-07-15 14:36:50.370802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.878 [2024-07-15 14:36:50.370891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.370912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.878 [2024-07-15 14:36:50.370922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.878 [2024-07-15 14:36:50.370940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.878 [2024-07-15 14:36:50.370954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.878 [2024-07-15 14:36:50.370963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.878 [2024-07-15 14:36:50.370972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.878 [2024-07-15 14:36:50.370988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.878 [2024-07-15 14:36:50.380102] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.878 [2024-07-15 14:36:50.380189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.878 [2024-07-15 14:36:50.380209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.380220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.380238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.380253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.380261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.380270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.380285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.380855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.380943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.380965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.380976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.380993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.381008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.381016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.381025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.879 [2024-07-15 14:36:50.381040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.879 [2024-07-15 14:36:50.390159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.879 [2024-07-15 14:36:50.390245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.390266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.390276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.390292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.390306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.390315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.390333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.390349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.390909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.390993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.391013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.391023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.391040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.391054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.391063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.391073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.879 [2024-07-15 14:36:50.391088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.879 [2024-07-15 14:36:50.400216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.879 [2024-07-15 14:36:50.400304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.400324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.400335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.400352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.400367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.400376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.400384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.400399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.400963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.401060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.401082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.401093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.401110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.401131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.401147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.401157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.879 [2024-07-15 14:36:50.401172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.879 [2024-07-15 14:36:50.410277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.879 [2024-07-15 14:36:50.410387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.410409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.410420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.410437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.410452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.410461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.410470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.410485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.411017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.411097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.411117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.411127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.411143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.411157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.411167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.411180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.879 [2024-07-15 14:36:50.411202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.879 [2024-07-15 14:36:50.420342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.879 [2024-07-15 14:36:50.420448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.420468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.420480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.420496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.420510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.420519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.420528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.420543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.421067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.421160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.421188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.421200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.421219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.421237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.421252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.421264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.879 [2024-07-15 14:36:50.421280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.879 [2024-07-15 14:36:50.430422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.879 [2024-07-15 14:36:50.430548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.430570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.430581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.430600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.430615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.430624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.430634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.430649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.431121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.431219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.431247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.431258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.431276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.431291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.431305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.431319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.879 [2024-07-15 14:36:50.431338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.879 [2024-07-15 14:36:50.440503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.879 [2024-07-15 14:36:50.440611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.440632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.879 [2024-07-15 14:36:50.440643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.440661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.440675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.879 [2024-07-15 14:36:50.440684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.879 [2024-07-15 14:36:50.440693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.879 [2024-07-15 14:36:50.440725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.879 [2024-07-15 14:36:50.441179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.879 [2024-07-15 14:36:50.441271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.879 [2024-07-15 14:36:50.441293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.879 [2024-07-15 14:36:50.441304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.879 [2024-07-15 14:36:50.441323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.879 [2024-07-15 14:36:50.441345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.880 [2024-07-15 14:36:50.441355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.880 [2024-07-15 14:36:50.441364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.880 [2024-07-15 14:36:50.441381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.880 [2024-07-15 14:36:50.450578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.880 [2024-07-15 14:36:50.450717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.880 [2024-07-15 14:36:50.450742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.880 [2024-07-15 14:36:50.450754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.880 [2024-07-15 14:36:50.450773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.880 [2024-07-15 14:36:50.450788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.880 [2024-07-15 14:36:50.450797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.880 [2024-07-15 14:36:50.450806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.880 [2024-07-15 14:36:50.450822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.880 [2024-07-15 14:36:50.451240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.880 [2024-07-15 14:36:50.451332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.880 [2024-07-15 14:36:50.451354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.880 [2024-07-15 14:36:50.451365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.880 [2024-07-15 14:36:50.451382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.880 [2024-07-15 14:36:50.451400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.880 [2024-07-15 14:36:50.451415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.880 [2024-07-15 14:36:50.451429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.880 [2024-07-15 14:36:50.451446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.880 [2024-07-15 14:36:50.460657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.880 [2024-07-15 14:36:50.460788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.880 [2024-07-15 14:36:50.460811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:10.880 [2024-07-15 14:36:50.460822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:10.880 [2024-07-15 14:36:50.460839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:10.880 [2024-07-15 14:36:50.460854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:10.880 [2024-07-15 14:36:50.460862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:10.880 [2024-07-15 14:36:50.460871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:10.880 [2024-07-15 14:36:50.460886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:10.880 [2024-07-15 14:36:50.461292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:10.880 [2024-07-15 14:36:50.461396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.880 [2024-07-15 14:36:50.461428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:10.880 [2024-07-15 14:36:50.461439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:10.880 [2024-07-15 14:36:50.461458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:10.880 [2024-07-15 14:36:50.461481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:10.880 [2024-07-15 14:36:50.461496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:10.880 [2024-07-15 14:36:50.461505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:10.880 [2024-07-15 14:36:50.461526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.141 [2024-07-15 14:36:50.470751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.141 [2024-07-15 14:36:50.470857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.141 [2024-07-15 14:36:50.470879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.141 [2024-07-15 14:36:50.470890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.141 [2024-07-15 14:36:50.470907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.141 [2024-07-15 14:36:50.470922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.141 [2024-07-15 14:36:50.470931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.141 [2024-07-15 14:36:50.470940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.141 [2024-07-15 14:36:50.470954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.141 [2024-07-15 14:36:50.471350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.141 [2024-07-15 14:36:50.471456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.141 [2024-07-15 14:36:50.471487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.141 [2024-07-15 14:36:50.471499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.141 [2024-07-15 14:36:50.471516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.141 [2024-07-15 14:36:50.471535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.141 [2024-07-15 14:36:50.471550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.141 [2024-07-15 14:36:50.471565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.141 [2024-07-15 14:36:50.471581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.141 [2024-07-15 14:36:50.480821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.141 [2024-07-15 14:36:50.480925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.141 [2024-07-15 14:36:50.480949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.141 [2024-07-15 14:36:50.480960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.141 [2024-07-15 14:36:50.480978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.141 [2024-07-15 14:36:50.480992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.141 [2024-07-15 14:36:50.481001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.141 [2024-07-15 14:36:50.481010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.141 [2024-07-15 14:36:50.481025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.141 [2024-07-15 14:36:50.481407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.141 [2024-07-15 14:36:50.481497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.141 [2024-07-15 14:36:50.481519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.141 [2024-07-15 14:36:50.481529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.141 [2024-07-15 14:36:50.481546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.141 [2024-07-15 14:36:50.481560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.141 [2024-07-15 14:36:50.481569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.141 [2024-07-15 14:36:50.481580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.141 [2024-07-15 14:36:50.481602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.141 [2024-07-15 14:36:50.490883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.141 [2024-07-15 14:36:50.490980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.141 [2024-07-15 14:36:50.491001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.141 [2024-07-15 14:36:50.491011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.141 [2024-07-15 14:36:50.491029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.141 [2024-07-15 14:36:50.491044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.141 [2024-07-15 14:36:50.491053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.141 [2024-07-15 14:36:50.491062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.141 [2024-07-15 14:36:50.491076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.141 [2024-07-15 14:36:50.491462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.141 [2024-07-15 14:36:50.491555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.141 [2024-07-15 14:36:50.491590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.141 [2024-07-15 14:36:50.491603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.141 [2024-07-15 14:36:50.491620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.141 [2024-07-15 14:36:50.491634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.491643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.491652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.142 [2024-07-15 14:36:50.491667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.142 [2024-07-15 14:36:50.500947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.142 [2024-07-15 14:36:50.501043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.501063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.142 [2024-07-15 14:36:50.501075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.501092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.501107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.501116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.501126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.142 [2024-07-15 14:36:50.501140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.142 [2024-07-15 14:36:50.501519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.142 [2024-07-15 14:36:50.501605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.501627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.142 [2024-07-15 14:36:50.501638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.501654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.501668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.501678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.501690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.142 [2024-07-15 14:36:50.501732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.142 [2024-07-15 14:36:50.511012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.142 [2024-07-15 14:36:50.511117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.511138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.142 [2024-07-15 14:36:50.511149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.511166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.511180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.511189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.511198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.142 [2024-07-15 14:36:50.511213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.142 [2024-07-15 14:36:50.511573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.142 [2024-07-15 14:36:50.511654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.511682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.142 [2024-07-15 14:36:50.511693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.511731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.511751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.511761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.511774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.142 [2024-07-15 14:36:50.511809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.142 [2024-07-15 14:36:50.521078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.142 [2024-07-15 14:36:50.521164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.521185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.142 [2024-07-15 14:36:50.521195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.521211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.521226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.521234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.521243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.142 [2024-07-15 14:36:50.521257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.142 [2024-07-15 14:36:50.521616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.142 [2024-07-15 14:36:50.521690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.521724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.142 [2024-07-15 14:36:50.521736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.521759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.521779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.521789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.521802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.142 [2024-07-15 14:36:50.521825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.142 [2024-07-15 14:36:50.531134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.142 [2024-07-15 14:36:50.531220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.531241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.142 [2024-07-15 14:36:50.531252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.531268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.531282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.531290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.531299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.142 [2024-07-15 14:36:50.531314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.142 [2024-07-15 14:36:50.531670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.142 [2024-07-15 14:36:50.531753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.531774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.142 [2024-07-15 14:36:50.531791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.531813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.531840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.531854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.531867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.142 [2024-07-15 14:36:50.531889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.142 [2024-07-15 14:36:50.541191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.142 [2024-07-15 14:36:50.541284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.541305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.142 [2024-07-15 14:36:50.541316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.541332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.541347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.541355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.541365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.142 [2024-07-15 14:36:50.541379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.142 [2024-07-15 14:36:50.541723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.142 [2024-07-15 14:36:50.541804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.541826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.142 [2024-07-15 14:36:50.541836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.541853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.541874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.142 [2024-07-15 14:36:50.541889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.142 [2024-07-15 14:36:50.541899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.142 [2024-07-15 14:36:50.541913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.142 [2024-07-15 14:36:50.551253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.142 [2024-07-15 14:36:50.551359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.142 [2024-07-15 14:36:50.551381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.142 [2024-07-15 14:36:50.551392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.142 [2024-07-15 14:36:50.551409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.142 [2024-07-15 14:36:50.551423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.551432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.551441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.143 [2024-07-15 14:36:50.551456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.143 [2024-07-15 14:36:50.551776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.143 [2024-07-15 14:36:50.551859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.551881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.143 [2024-07-15 14:36:50.551896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.551931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.551954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.551965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.551974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.143 [2024-07-15 14:36:50.551991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.143 [2024-07-15 14:36:50.561328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.143 [2024-07-15 14:36:50.561481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.561504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.143 [2024-07-15 14:36:50.561516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.561536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.561562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.561570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.561580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.143 [2024-07-15 14:36:50.561596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.143 [2024-07-15 14:36:50.561829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.143 [2024-07-15 14:36:50.561907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.561932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.143 [2024-07-15 14:36:50.561950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.561974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.561992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.562001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.562011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.143 [2024-07-15 14:36:50.562025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.143 [2024-07-15 14:36:50.571421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.143 [2024-07-15 14:36:50.571573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.571597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.143 [2024-07-15 14:36:50.571609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.571628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.571644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.571653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.571662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.143 [2024-07-15 14:36:50.571678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.143 [2024-07-15 14:36:50.571874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.143 [2024-07-15 14:36:50.571954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.571976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.143 [2024-07-15 14:36:50.571986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.572003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.572023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.572035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.572050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.143 [2024-07-15 14:36:50.572072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.143 [2024-07-15 14:36:50.581509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.143 [2024-07-15 14:36:50.581640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.581664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.143 [2024-07-15 14:36:50.581675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.581692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.581721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.581731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.581740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.143 [2024-07-15 14:36:50.581756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.143 [2024-07-15 14:36:50.581918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.143 [2024-07-15 14:36:50.581989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.582017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.143 [2024-07-15 14:36:50.582029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.582046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.582067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.582083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.582097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.143 [2024-07-15 14:36:50.582118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.143 [2024-07-15 14:36:50.591597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.143 [2024-07-15 14:36:50.591743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.591766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.143 [2024-07-15 14:36:50.591777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.591796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.591810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.591819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.591828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.143 [2024-07-15 14:36:50.591842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.143 [2024-07-15 14:36:50.591970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.143 [2024-07-15 14:36:50.592052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.592073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.143 [2024-07-15 14:36:50.592084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.592108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.592126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.592135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.592145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.143 [2024-07-15 14:36:50.592159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.143 [2024-07-15 14:36:50.601690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.143 [2024-07-15 14:36:50.601844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.601867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.143 [2024-07-15 14:36:50.601878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.601896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.143 [2024-07-15 14:36:50.601911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.143 [2024-07-15 14:36:50.601919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.143 [2024-07-15 14:36:50.601929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.143 [2024-07-15 14:36:50.601945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.143 [2024-07-15 14:36:50.602009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.143 [2024-07-15 14:36:50.602081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.143 [2024-07-15 14:36:50.602106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.143 [2024-07-15 14:36:50.602121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.143 [2024-07-15 14:36:50.602143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.602162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.602172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.602181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.144 [2024-07-15 14:36:50.602199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.144 [2024-07-15 14:36:50.611781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.144 [2024-07-15 14:36:50.611887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.611906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.144 [2024-07-15 14:36:50.611916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.611931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.611955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.611965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.611974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.144 [2024-07-15 14:36:50.611988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.144 [2024-07-15 14:36:50.612038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.144 [2024-07-15 14:36:50.612126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.612152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.144 [2024-07-15 14:36:50.612168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.612192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.612210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.612221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.612231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.144 [2024-07-15 14:36:50.612252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.144 [2024-07-15 14:36:50.621855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.144 [2024-07-15 14:36:50.621966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.621986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.144 [2024-07-15 14:36:50.621996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.622012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.622026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.622034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.622043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.144 [2024-07-15 14:36:50.622057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.144 [2024-07-15 14:36:50.622083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.144 [2024-07-15 14:36:50.622136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.622154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.144 [2024-07-15 14:36:50.622179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.622199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.622233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.622248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.622263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.144 [2024-07-15 14:36:50.622283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.144 [2024-07-15 14:36:50.631933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.144 [2024-07-15 14:36:50.632067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.632088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.144 [2024-07-15 14:36:50.632101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.632128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.632155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.632165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.632174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.144 [2024-07-15 14:36:50.632188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.144 [2024-07-15 14:36:50.632200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.144 [2024-07-15 14:36:50.632255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.632273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.144 [2024-07-15 14:36:50.632287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.632311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.632333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.632346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.632360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.144 [2024-07-15 14:36:50.632380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.144 [2024-07-15 14:36:50.642027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.144 [2024-07-15 14:36:50.642135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.642155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.144 [2024-07-15 14:36:50.642165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.642182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.642197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.642206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.642216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.144 [2024-07-15 14:36:50.642230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.144 [2024-07-15 14:36:50.642252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.144 [2024-07-15 14:36:50.642307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.642334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.144 [2024-07-15 14:36:50.642361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.642377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.642391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.642400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.642409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.144 [2024-07-15 14:36:50.642423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.144 [2024-07-15 14:36:50.652104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.144 [2024-07-15 14:36:50.652223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.652243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.144 [2024-07-15 14:36:50.652254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.652270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.652287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.652296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.652305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.144 [2024-07-15 14:36:50.652325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.144 [2024-07-15 14:36:50.652339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.144 [2024-07-15 14:36:50.652394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.652412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.144 [2024-07-15 14:36:50.652422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.652437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.652451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.144 [2024-07-15 14:36:50.652459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.144 [2024-07-15 14:36:50.652468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.144 [2024-07-15 14:36:50.652481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.144 [2024-07-15 14:36:50.662186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.144 [2024-07-15 14:36:50.662305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.144 [2024-07-15 14:36:50.662336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.144 [2024-07-15 14:36:50.662365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.144 [2024-07-15 14:36:50.662382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.144 [2024-07-15 14:36:50.662408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.662419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.662428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.145 [2024-07-15 14:36:50.662443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.145 [2024-07-15 14:36:50.662455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.145 [2024-07-15 14:36:50.662510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.662528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.145 [2024-07-15 14:36:50.662538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.662554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.662568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.662577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.662586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.145 [2024-07-15 14:36:50.662600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.145 [2024-07-15 14:36:50.672268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.145 [2024-07-15 14:36:50.672407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.672429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.145 [2024-07-15 14:36:50.672441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.672463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.672480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.672489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.672498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.145 [2024-07-15 14:36:50.672519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.145 [2024-07-15 14:36:50.672534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.145 [2024-07-15 14:36:50.672591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.672610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.145 [2024-07-15 14:36:50.672621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.672637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.672650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.672659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.672668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.145 [2024-07-15 14:36:50.672683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.145 [2024-07-15 14:36:50.682369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.145 [2024-07-15 14:36:50.682479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.682500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.145 [2024-07-15 14:36:50.682512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.682528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.682545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.682563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.682572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.145 [2024-07-15 14:36:50.682587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.145 [2024-07-15 14:36:50.682610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.145 [2024-07-15 14:36:50.682667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.682686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.145 [2024-07-15 14:36:50.682714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.682734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.682748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.682758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.682767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.145 [2024-07-15 14:36:50.682781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.145 [2024-07-15 14:36:50.692439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.145 [2024-07-15 14:36:50.692539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.692560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.145 [2024-07-15 14:36:50.692571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.692588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.692603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.692612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.692621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.145 [2024-07-15 14:36:50.692639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.145 [2024-07-15 14:36:50.692662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.145 [2024-07-15 14:36:50.692734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.692754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.145 [2024-07-15 14:36:50.692765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.692781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.692796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.145 [2024-07-15 14:36:50.692805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.692814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.145 [2024-07-15 14:36:50.692828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.145 [2024-07-15 14:36:50.695561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.695601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d1410 with addr=10.0.0.3, port=8009 00:20:11.145 [2024-07-15 14:36:50.695620] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:11.145 [2024-07-15 14:36:50.695630] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:11.145 [2024-07-15 14:36:50.695640] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.3:8009] could not start discovery connect 00:20:11.145 [2024-07-15 14:36:50.695752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.695773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d1410 with addr=10.0.0.2, port=8009 00:20:11.145 [2024-07-15 14:36:50.695786] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:11.145 [2024-07-15 14:36:50.695794] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:11.145 [2024-07-15 14:36:50.695803] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8009] could not start discovery connect 00:20:11.145 [2024-07-15 14:36:50.702508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.145 [2024-07-15 14:36:50.702607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.145 [2024-07-15 14:36:50.702628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.145 [2024-07-15 14:36:50.702639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.145 [2024-07-15 14:36:50.702656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.145 [2024-07-15 14:36:50.702674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 
00:20:11.145 [2024-07-15 14:36:50.702683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.145 [2024-07-15 14:36:50.702692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.146 [2024-07-15 14:36:50.702729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.146 [2024-07-15 14:36:50.702754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.146 [2024-07-15 14:36:50.702814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.146 [2024-07-15 14:36:50.702832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.146 [2024-07-15 14:36:50.702842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.146 [2024-07-15 14:36:50.702858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.146 [2024-07-15 14:36:50.702872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.146 [2024-07-15 14:36:50.702881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.146 [2024-07-15 14:36:50.702889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.146 [2024-07-15 14:36:50.702903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.146 [2024-07-15 14:36:50.712573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.146 [2024-07-15 14:36:50.712689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.146 [2024-07-15 14:36:50.712727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.146 [2024-07-15 14:36:50.712740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.146 [2024-07-15 14:36:50.712757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.146 [2024-07-15 14:36:50.712775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.146 [2024-07-15 14:36:50.712784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.146 [2024-07-15 14:36:50.712793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.146 [2024-07-15 14:36:50.712814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.146 [2024-07-15 14:36:50.712830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.146 [2024-07-15 14:36:50.712887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.146 [2024-07-15 14:36:50.712906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.146 [2024-07-15 14:36:50.712916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.146 [2024-07-15 14:36:50.712932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.146 [2024-07-15 14:36:50.712946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.146 [2024-07-15 14:36:50.712955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.146 [2024-07-15 14:36:50.712964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.146 [2024-07-15 14:36:50.712978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.146 [2024-07-15 14:36:50.722640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.146 [2024-07-15 14:36:50.722748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.146 [2024-07-15 14:36:50.722770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.146 [2024-07-15 14:36:50.722781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.146 [2024-07-15 14:36:50.722798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.146 [2024-07-15 14:36:50.722814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.146 [2024-07-15 14:36:50.722824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.146 [2024-07-15 14:36:50.722833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.146 [2024-07-15 14:36:50.722848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.146 [2024-07-15 14:36:50.722873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.146 [2024-07-15 14:36:50.722933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.146 [2024-07-15 14:36:50.722952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.146 [2024-07-15 14:36:50.722962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.146 [2024-07-15 14:36:50.722977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.146 [2024-07-15 14:36:50.722991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.146 [2024-07-15 14:36:50.723000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.146 [2024-07-15 14:36:50.723010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.146 [2024-07-15 14:36:50.723030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.732701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.407 [2024-07-15 14:36:50.732822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.732843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.407 [2024-07-15 14:36:50.732854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.732871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.732889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.732899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.732908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.407 [2024-07-15 14:36:50.732923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.407 [2024-07-15 14:36:50.732946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.407 [2024-07-15 14:36:50.733004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.733023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.407 [2024-07-15 14:36:50.733034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.733049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.733063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.733073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.733082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.407 [2024-07-15 14:36:50.733096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.742773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.407 [2024-07-15 14:36:50.742893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.742915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.407 [2024-07-15 14:36:50.742926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.742944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.742961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.742971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.742980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.407 [2024-07-15 14:36:50.742995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.407 [2024-07-15 14:36:50.743018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.407 [2024-07-15 14:36:50.743075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.743093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.407 [2024-07-15 14:36:50.743103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.743119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.743133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.743142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.743151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.407 [2024-07-15 14:36:50.743165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.751068] thread.c: 639:thread_exit: *ERROR*: thread app_thread got timeout, and move it to the exited state forcefully 00:20:11.407 [2024-07-15 14:36:50.751203] thread.c: 386:_free_thread: *WARNING*: timed_poller nvmf_avahi_publish_iterate still registered at thread exit 00:20:11.407 [2024-07-15 14:36:50.752840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.407 [2024-07-15 14:36:50.752937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.752959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.407 [2024-07-15 14:36:50.752969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.752986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.753001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.753010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.753020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.407 [2024-07-15 14:36:50.753037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
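The interleaved "Failed to flush tqpair=... (9): Bad file descriptor" entries report errno 9, which on Linux is EBADF: after the failed qpair's socket has been torn down, any further I/O attempted on that descriptor is rejected. The following is a minimal standalone C sketch, not SPDK code, illustrating the same errno on an already-closed socket descriptor:

/* Minimal sketch (hypothetical fd handling, not SPDK code): once a socket
 * descriptor has been closed, a later write() on it fails with EBADF,
 * which is errno 9 on Linux -- the "(9): Bad file descriptor" above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    close(fd);                               /* connection already torn down */

    if (write(fd, "x", 1) < 0) {
        /* Expected output: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}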
00:20:11.407 [2024-07-15 14:36:50.753060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.407 [2024-07-15 14:36:50.753118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.753137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.407 [2024-07-15 14:36:50.753147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.753163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.753177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.753192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.753201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.407 [2024-07-15 14:36:50.753216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.762908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.407 [2024-07-15 14:36:50.763031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.763054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.407 [2024-07-15 14:36:50.763065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.763083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.763110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.763121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.763131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.407 [2024-07-15 14:36:50.763145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.407 [2024-07-15 14:36:50.763157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.407 [2024-07-15 14:36:50.763212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.763230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.407 [2024-07-15 14:36:50.763241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.763256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.763271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.763280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.763289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.407 [2024-07-15 14:36:50.763303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.772987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.407 [2024-07-15 14:36:50.773157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.773179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.407 [2024-07-15 14:36:50.773190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.773211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.773237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.773248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.773258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.407 [2024-07-15 14:36:50.773273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.407 [2024-07-15 14:36:50.773285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.407 [2024-07-15 14:36:50.773340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.773358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.407 [2024-07-15 14:36:50.773368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.773384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.773399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.773408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.773417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.407 [2024-07-15 14:36:50.773432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.783127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.407 [2024-07-15 14:36:50.783330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.783355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.407 [2024-07-15 14:36:50.783368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.783392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.783421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.783431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.783442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.407 [2024-07-15 14:36:50.783458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.407 [2024-07-15 14:36:50.783471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.407 [2024-07-15 14:36:50.783527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.407 [2024-07-15 14:36:50.783545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.407 [2024-07-15 14:36:50.783556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.407 [2024-07-15 14:36:50.783572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.407 [2024-07-15 14:36:50.783587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.407 [2024-07-15 14:36:50.783596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.407 [2024-07-15 14:36:50.783605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.407 [2024-07-15 14:36:50.783620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.407 [2024-07-15 14:36:50.793242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.408 [2024-07-15 14:36:50.793363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.793386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.408 [2024-07-15 14:36:50.793398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.793417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.793436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.793445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.793455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.408 [2024-07-15 14:36:50.793470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.408 [2024-07-15 14:36:50.793505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.408 [2024-07-15 14:36:50.793574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.793593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.408 [2024-07-15 14:36:50.793604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.793626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.793647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.793657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.793666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.408 [2024-07-15 14:36:50.793681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.408 [2024-07-15 14:36:50.803323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.408 [2024-07-15 14:36:50.803457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.803480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.408 [2024-07-15 14:36:50.803491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.803508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.803526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.803535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.803545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.408 [2024-07-15 14:36:50.803560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.408 [2024-07-15 14:36:50.803583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.408 [2024-07-15 14:36:50.803641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.803660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.408 [2024-07-15 14:36:50.803670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.803686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.803722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.803734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.803744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.408 [2024-07-15 14:36:50.803759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.408 [2024-07-15 14:36:50.813429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.408 [2024-07-15 14:36:50.813616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.813640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.408 [2024-07-15 14:36:50.813655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.813676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.813729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.813742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.813753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.408 [2024-07-15 14:36:50.813768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.408 [2024-07-15 14:36:50.813781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.408 [2024-07-15 14:36:50.813839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.813858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.408 [2024-07-15 14:36:50.813868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.813884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.813899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.813908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.813917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.408 [2024-07-15 14:36:50.813931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.408 [2024-07-15 14:36:50.823542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.408 [2024-07-15 14:36:50.823672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.823709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.408 [2024-07-15 14:36:50.823724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.823744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.823762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.823771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.823780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.408 [2024-07-15 14:36:50.823796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.408 [2024-07-15 14:36:50.823823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.408 [2024-07-15 14:36:50.823895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.823918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.408 [2024-07-15 14:36:50.823929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.823946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.823960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.823969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.823978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.408 [2024-07-15 14:36:50.823998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.408 [2024-07-15 14:36:50.833616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.408 [2024-07-15 14:36:50.833720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.833743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.408 [2024-07-15 14:36:50.833754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.833772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.833788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.833797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.833806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.408 [2024-07-15 14:36:50.833821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.408 [2024-07-15 14:36:50.833855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.408 [2024-07-15 14:36:50.833915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.833936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.408 [2024-07-15 14:36:50.833952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.833975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.833990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.834000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.834008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.408 [2024-07-15 14:36:50.834023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.408 [2024-07-15 14:36:50.843681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.408 [2024-07-15 14:36:50.843807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.408 [2024-07-15 14:36:50.843830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.408 [2024-07-15 14:36:50.843842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.408 [2024-07-15 14:36:50.843861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.408 [2024-07-15 14:36:50.843879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.408 [2024-07-15 14:36:50.843889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.408 [2024-07-15 14:36:50.843899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.408 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.408 [2024-07-15 14:36:50.843926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.409 [2024-07-15 14:36:50.843950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.409 [2024-07-15 14:36:50.844022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.844047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.409 [2024-07-15 14:36:50.844064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.844088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.844117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.844128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.844137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.409 [2024-07-15 14:36:50.844153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.409 [2024-07-15 14:36:50.853760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.409 [2024-07-15 14:36:50.853861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.853882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.409 [2024-07-15 14:36:50.853893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.853910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.853928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.853937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.853946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.409 [2024-07-15 14:36:50.853961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.409 [2024-07-15 14:36:50.853990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.409 [2024-07-15 14:36:50.854071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.854097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.409 [2024-07-15 14:36:50.854109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.854127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.854160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.854176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.854191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.409 [2024-07-15 14:36:50.854207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.409 [2024-07-15 14:36:50.863828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.409 [2024-07-15 14:36:50.863941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.863963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.409 [2024-07-15 14:36:50.863974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.863992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.864010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.864020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.864029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.409 [2024-07-15 14:36:50.864044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:11.409 [2024-07-15 14:36:50.864081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.409 [2024-07-15 14:36:50.864153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.864172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.409 [2024-07-15 14:36:50.864182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.864208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.864228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.864238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.864247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.409 [2024-07-15 14:36:50.864261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.409 [2024-07-15 14:36:50.873897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:11.409 [2024-07-15 14:36:50.874012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.874035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e53a0 with addr=10.0.0.2, port=4420 00:20:11.409 [2024-07-15 14:36:50.874046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e53a0 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.874063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e53a0 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.874080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.874089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.874099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:11.409 [2024-07-15 14:36:50.874114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
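Note on the repeated errors above: errno 111 on Linux is ECONNREFUSED, so by this point nothing appears to be listening on 10.0.0.2:4420 or 10.0.0.3:4420 any more, and every reconnect attempt the bdev_nvme poller makes for cnode0 and cnode20 fails at connect() and leaves the controllers in the failed state. A plain TCP probe is enough to confirm that condition from the host side; the lines below are only an illustrative sketch (they assume nc is installed and are not part of the test scripts):

  for addr in 10.0.0.2 10.0.0.3; do
      # -z: probe only, -w 1: one-second timeout; a refusal here matches errno 111
      if nc -z -w 1 "$addr" 4420; then
          echo "listener still up on $addr:4420"
      else
          echo "no listener on $addr:4420 (connection refused or timed out)"
      fi
  done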
00:20:11.409 [2024-07-15 14:36:50.874136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:20:11.409 [2024-07-15 14:36:50.874194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.409 [2024-07-15 14:36:50.874213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x169e360 with addr=10.0.0.3, port=4420 00:20:11.409 [2024-07-15 14:36:50.874223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169e360 is same with the state(5) to be set 00:20:11.409 [2024-07-15 14:36:50.874238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e360 (9): Bad file descriptor 00:20:11.409 [2024-07-15 14:36:50.874253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:20:11.409 [2024-07-15 14:36:50.874263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:20:11.409 [2024-07-15 14:36:50.874272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:20:11.409 [2024-07-15 14:36:50.874286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@1 -- # kill 94107 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@1 -- # kill 94124 00:20:11.409 Got SIGTERM, quitting. 00:20:11.409 00:20:11.409 real 0m13.558s 00:20:11.409 user 0m13.985s 00:20:11.409 sys 0m0.913s 00:20:11.409 ************************************ 00:20:11.409 END TEST nvmf_mdns_discovery 00:20:11.409 ************************************ 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # es=1 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.409 14:36:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:20:11.409 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:20:11.409 avahi-daemon 0.8 exiting. 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 1 00:20:11.409 14:36:50 nvmf_tcp -- nvmf/nvmf.sh@113 -- # trap - ERR 00:20:11.409 14:36:50 nvmf_tcp -- nvmf/nvmf.sh@113 -- # print_backtrace 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 ========== Backtrace start: ========== 00:20:11.409 00:20:11.409 in /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh:113 -> main(["--transport=tcp"]) 00:20:11.409 ... 
00:20:11.409 108 run_test "nvmf_digest" "$rootdir/test/nvmf/host/digest.sh" "${TEST_ARGS[@]}" 00:20:11.409 109 fi 00:20:11.409 110 00:20:11.409 111 if [[ $SPDK_TEST_NVMF_MDNS -eq 1 && "$SPDK_TEST_NVMF_TRANSPORT" == "tcp" ]]; then 00:20:11.409 112 # Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:11.409 => 113 run_test "nvmf_mdns_discovery" $rootdir/test/nvmf/host/mdns_discovery.sh "${TEST_ARGS[@]}" 00:20:11.409 114 fi 00:20:11.409 115 00:20:11.409 116 if [[ $SPDK_TEST_USDT -eq 1 ]]; then 00:20:11.409 117 run_test "nvmf_host_multipath" $rootdir/test/nvmf/host/multipath.sh "${TEST_ARGS[@]}" 00:20:11.409 118 run_test "nvmf_timeout" $rootdir/test/nvmf/host/timeout.sh "${TEST_ARGS[@]}" 00:20:11.409 ... 00:20:11.409 00:20:11.409 ========== Backtrace end ========== 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:20:11.409 14:36:50 nvmf_tcp -- nvmf/nvmf.sh@1 -- # exit 1 00:20:11.409 ************************************ 00:20:11.409 END TEST nvmf_tcp 00:20:11.409 ************************************ 00:20:11.409 00:20:11.409 real 13m33.488s 00:20:11.409 user 35m45.352s 00:20:11.409 sys 2m54.165s 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1123 -- # es=1 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.409 14:36:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.409 14:36:50 -- common/autotest_common.sh@1142 -- # return 1 00:20:11.409 14:36:50 -- spdk/autotest.sh@287 -- # trap - ERR 00:20:11.409 14:36:50 -- spdk/autotest.sh@287 -- # print_backtrace 00:20:11.409 14:36:50 -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:20:11.410 14:36:50 -- common/autotest_common.sh@1155 -- # args=('/home/vagrant/spdk_repo/autorun-spdk.conf') 00:20:11.410 14:36:50 -- common/autotest_common.sh@1155 -- # local args 00:20:11.410 14:36:50 -- common/autotest_common.sh@1157 -- # xtrace_disable 00:20:11.410 14:36:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.410 ========== Backtrace start: ========== 00:20:11.410 00:20:11.410 in /home/vagrant/spdk_repo/spdk/autotest.sh:287 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"]) 00:20:11.410 ... 00:20:11.410 282 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:20:11.410 283 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:20:11.410 284 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:11.410 285 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:11.410 286 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:20:11.410 => 287 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:11.410 288 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:20:11.410 289 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:11.410 290 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:20:11.410 291 fi 00:20:11.410 292 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:20:11.410 ... 
00:20:11.410 00:20:11.410 ========== Backtrace end ========== 00:20:11.410 14:36:50 -- common/autotest_common.sh@1194 -- # return 0 00:20:11.410 14:36:50 -- spdk/autotest.sh@1 -- # autotest_cleanup 00:20:11.410 14:36:50 -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:20:11.410 14:36:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:11.410 14:36:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.667 [2024-07-15 14:36:51.131953] bdev_mdns_client.c: 413:client_handler: *ERROR*: Server connection failure: Daemon connection failed 00:20:18.266 [2024-07-15 14:36:56.750994] thread.c: 639:thread_exit: *ERROR*: thread app_thread got timeout, and move it to the exited state forcefully 00:20:18.266 [2024-07-15 14:36:56.751285] thread.c: 386:_free_thread: *WARNING*: timed_poller bdev_nvme_avahi_iterate still registered at thread exit 00:20:23.526 INFO: APP EXITING 00:20:23.526 INFO: killing all VMs 00:20:23.526 INFO: killing vhost app 00:20:23.527 INFO: EXIT DONE 00:20:23.527 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:23.527 Waiting for block devices as requested 00:20:23.527 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:23.527 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:24.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:24.465 Cleaning 00:20:24.465 Removing: /var/run/dpdk/spdk0/config 00:20:24.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:24.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:24.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:24.466 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:24.466 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:24.466 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:24.466 Removing: /var/run/dpdk/spdk1/config 00:20:24.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:24.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:24.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:20:24.466 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:24.466 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:24.466 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:24.466 Removing: /var/run/dpdk/spdk2/config 00:20:24.466 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:24.466 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:24.466 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:24.466 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:24.466 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:24.466 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:24.466 Removing: /var/run/dpdk/spdk3/config 00:20:24.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:24.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:24.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:24.466 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:24.466 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:24.466 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:24.466 Removing: /var/run/dpdk/spdk4/config 00:20:24.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:24.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:24.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:24.466 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:24.466 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:20:24.466 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:24.466 Removing: /dev/shm/nvmf_trace.0 00:20:24.466 Removing: /dev/shm/spdk_tgt_trace.pid60760 00:20:24.466 Removing: /var/run/dpdk/spdk0 00:20:24.466 Removing: /var/run/dpdk/spdk1 00:20:24.466 Removing: /var/run/dpdk/spdk2 00:20:24.466 Removing: /var/run/dpdk/spdk3 00:20:24.466 Removing: /var/run/dpdk/spdk4 00:20:24.466 Removing: /var/run/dpdk/spdk_pid60620 00:20:24.466 Removing: /var/run/dpdk/spdk_pid60760 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61003 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61090 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61135 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61239 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61261 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61379 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61649 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61825 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61907 00:20:24.466 Removing: /var/run/dpdk/spdk_pid61994 00:20:24.466 Removing: /var/run/dpdk/spdk_pid62083 00:20:24.466 Removing: /var/run/dpdk/spdk_pid62122 00:20:24.466 Removing: /var/run/dpdk/spdk_pid62152 00:20:24.466 Removing: /var/run/dpdk/spdk_pid62213 00:20:24.466 Removing: /var/run/dpdk/spdk_pid62331 00:20:24.466 Removing: /var/run/dpdk/spdk_pid62942 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63006 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63075 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63084 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63163 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63172 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63252 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63261 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63318 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63348 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63394 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63405 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63553 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63588 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63663 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63731 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63751 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63816 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63846 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63881 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63915 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63951 00:20:24.466 Removing: /var/run/dpdk/spdk_pid63980 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64015 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64052 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64081 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64121 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64150 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64179 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64219 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64248 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64287 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64317 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64346 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64389 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64421 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64456 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64491 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64556 00:20:24.466 Removing: /var/run/dpdk/spdk_pid64648 00:20:24.466 Removing: /var/run/dpdk/spdk_pid65062 00:20:24.466 Removing: /var/run/dpdk/spdk_pid68333 00:20:24.466 Removing: /var/run/dpdk/spdk_pid68667 00:20:24.466 Removing: /var/run/dpdk/spdk_pid71100 00:20:24.466 Removing: 
/var/run/dpdk/spdk_pid71475 00:20:24.723 Removing: /var/run/dpdk/spdk_pid71740 00:20:24.723 Removing: /var/run/dpdk/spdk_pid71785 00:20:24.723 Removing: /var/run/dpdk/spdk_pid72407 00:20:24.723 Removing: /var/run/dpdk/spdk_pid72804 00:20:24.723 Removing: /var/run/dpdk/spdk_pid72854 00:20:24.723 Removing: /var/run/dpdk/spdk_pid73210 00:20:24.723 Removing: /var/run/dpdk/spdk_pid73722 00:20:24.723 Removing: /var/run/dpdk/spdk_pid74177 00:20:24.723 Removing: /var/run/dpdk/spdk_pid75136 00:20:24.723 Removing: /var/run/dpdk/spdk_pid76092 00:20:24.723 Removing: /var/run/dpdk/spdk_pid76210 00:20:24.723 Removing: /var/run/dpdk/spdk_pid76274 00:20:24.723 Removing: /var/run/dpdk/spdk_pid77724 00:20:24.723 Removing: /var/run/dpdk/spdk_pid77939 00:20:24.723 Removing: /var/run/dpdk/spdk_pid83295 00:20:24.723 Removing: /var/run/dpdk/spdk_pid83734 00:20:24.723 Removing: /var/run/dpdk/spdk_pid83841 00:20:24.723 Removing: /var/run/dpdk/spdk_pid83993 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84019 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84065 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84097 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84255 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84402 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84667 00:20:24.723 Removing: /var/run/dpdk/spdk_pid84790 00:20:24.723 Removing: /var/run/dpdk/spdk_pid85041 00:20:24.723 Removing: /var/run/dpdk/spdk_pid85165 00:20:24.723 Removing: /var/run/dpdk/spdk_pid85305 00:20:24.723 Removing: /var/run/dpdk/spdk_pid85642 00:20:24.724 Removing: /var/run/dpdk/spdk_pid86038 00:20:24.724 Removing: /var/run/dpdk/spdk_pid86350 00:20:24.724 Removing: /var/run/dpdk/spdk_pid86827 00:20:24.724 Removing: /var/run/dpdk/spdk_pid86835 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87171 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87191 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87205 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87230 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87246 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87597 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87640 00:20:24.724 Removing: /var/run/dpdk/spdk_pid87976 00:20:24.724 Removing: /var/run/dpdk/spdk_pid88227 00:20:24.724 Removing: /var/run/dpdk/spdk_pid88691 00:20:24.724 Removing: /var/run/dpdk/spdk_pid89279 00:20:24.724 Removing: /var/run/dpdk/spdk_pid90632 00:20:24.724 Removing: /var/run/dpdk/spdk_pid91212 00:20:24.724 Removing: /var/run/dpdk/spdk_pid91214 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93154 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93243 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93329 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93406 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93550 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93635 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93725 00:20:24.724 Removing: /var/run/dpdk/spdk_pid93796 00:20:24.724 Removing: /var/run/dpdk/spdk_pid94107 00:20:24.724 Clean 00:20:31.291 14:37:09 -- common/autotest_common.sh@1451 -- # return 1 00:20:31.291 14:37:09 -- spdk/autotest.sh@1 -- # : 00:20:31.291 14:37:09 -- spdk/autotest.sh@1 -- # exit 1 00:20:31.303 [Pipeline] } 00:20:31.351 [Pipeline] // timeout 00:20:31.359 [Pipeline] } 00:20:31.382 [Pipeline] // stage 00:20:31.391 [Pipeline] } 00:20:31.397 ERROR: script returned exit code 1 00:20:31.397 Setting overall build result to FAILURE 00:20:31.419 [Pipeline] // catchError 00:20:31.429 [Pipeline] stage 00:20:31.431 [Pipeline] { (Stop VM) 00:20:31.447 [Pipeline] sh 00:20:31.729 + vagrant halt 00:20:35.980 ==> default: Halting domain... 
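Note on the failure propagation: the two backtraces earlier show the path — autotest.sh:287 ran the tcp suite because SPDK_TEST_NVMF_TRANSPORT=tcp, nvmf.sh:113 ran nvmf_mdns_discovery because SPDK_TEST_NVMF_MDNS=1, so the single failing sub-test marks nvmf_tcp and then the whole run as failed, and the Clean stage's `return 1` above simply re-raises that status to Jenkins. A condensed sketch of that dispatch pattern (illustrative only; the real run_test lives in autotest_common.sh and also records timing via xtrace):

  # minimal stand-in for the harness pattern: flags come from autorun-spdk.conf,
  # run_test wraps a sub-suite and propagates its exit status upward
  run_test() { local name=$1; shift; echo "START TEST $name"; "$@"; local es=$?; echo "END TEST $name (es=$es)"; return $es; }
  if [[ "$SPDK_TEST_NVMF_MDNS" -eq 1 && "$SPDK_TEST_NVMF_TRANSPORT" == "tcp" ]]; then
      run_test "nvmf_mdns_discovery" "$rootdir/test/nvmf/host/mdns_discovery.sh" --transport=tcp
  fi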
00:20:41.257 [Pipeline] sh 00:20:41.534 + vagrant destroy -f 00:20:45.721 ==> default: Removing domain... 00:20:45.732 [Pipeline] sh 00:20:46.011 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:20:46.020 [Pipeline] } 00:20:46.036 [Pipeline] // stage 00:20:46.042 [Pipeline] } 00:20:46.060 [Pipeline] // dir 00:20:46.065 [Pipeline] } 00:20:46.080 [Pipeline] // wrap 00:20:46.084 [Pipeline] } 00:20:46.097 [Pipeline] // catchError 00:20:46.105 [Pipeline] stage 00:20:46.107 [Pipeline] { (Epilogue) 00:20:46.123 [Pipeline] sh 00:20:46.403 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:48.317 [Pipeline] catchError 00:20:48.319 [Pipeline] { 00:20:48.339 [Pipeline] sh 00:20:48.623 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:48.624 Artifacts sizes are good 00:20:48.632 [Pipeline] } 00:20:48.652 [Pipeline] // catchError 00:20:48.664 [Pipeline] archiveArtifacts 00:20:48.671 Archiving artifacts 00:20:48.912 [Pipeline] cleanWs 00:20:48.922 [WS-CLEANUP] Deleting project workspace... 00:20:48.923 [WS-CLEANUP] Deferred wipeout is used... 00:20:48.929 [WS-CLEANUP] done 00:20:48.930 [Pipeline] } 00:20:48.947 [Pipeline] // stage 00:20:48.953 [Pipeline] } 00:20:48.969 [Pipeline] // node 00:20:48.973 [Pipeline] End of Pipeline 00:20:49.081 Finished: FAILURE
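Post-mortem note: the build output was moved to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output and archived before the workspace wipe, so the quickest triage is to pull the first hard error out of the captured log instead of scrolling the reconnect spam. A hypothetical one-liner (the archived log file name is an assumption, not something this pipeline prints):

  # find the first controller-level error; adjust the glob to the archived artifact name
  grep -n -m1 -E 'controller reinitialization failed|connect\(\) failed, errno = 111' output/*.txt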